2026-03-05 00:00:13.162490 | Job console starting
2026-03-05 00:00:13.187706 | Updating git repos
2026-03-05 00:00:13.832608 | Cloning repos into workspace
2026-03-05 00:00:14.364404 | Restoring repo states
2026-03-05 00:00:14.441056 | Merging changes
2026-03-05 00:00:14.441083 | Checking out repos
2026-03-05 00:00:15.492765 | Preparing playbooks
2026-03-05 00:00:17.158618 | Running Ansible setup
2026-03-05 00:00:25.879277 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-03-05 00:00:27.587572 |
2026-03-05 00:00:27.587692 | PLAY [Base pre]
2026-03-05 00:00:27.625733 |
2026-03-05 00:00:27.625861 | TASK [Setup log path fact]
2026-03-05 00:00:27.676518 | orchestrator | ok
2026-03-05 00:00:27.697839 |
2026-03-05 00:00:27.697955 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-03-05 00:00:27.792178 | orchestrator | ok
2026-03-05 00:00:27.809637 |
2026-03-05 00:00:27.809733 | TASK [emit-job-header : Print job information]
2026-03-05 00:00:27.881922 | # Job Information
2026-03-05 00:00:27.882061 | Ansible Version: 2.16.14
2026-03-05 00:00:27.882090 | Job: testbed-deploy-current-in-a-nutshell-with-tempest-ubuntu-24.04
2026-03-05 00:00:27.882118 | Pipeline: periodic-midnight
2026-03-05 00:00:27.882137 | Executor: 521e9411259a
2026-03-05 00:00:27.882154 | Triggered by: https://github.com/osism/testbed
2026-03-05 00:00:27.882172 | Event ID: 4d3fa243eeba40b18dad4451eb586835
2026-03-05 00:00:27.889077 |
2026-03-05 00:00:27.889167 | LOOP [emit-job-header : Print node information]
2026-03-05 00:00:28.111409 | orchestrator | ok:
2026-03-05 00:00:28.111704 | orchestrator | # Node Information
2026-03-05 00:00:28.111763 | orchestrator | Inventory Hostname: orchestrator
2026-03-05 00:00:28.111787 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-03-05 00:00:28.111805 | orchestrator | Username: zuul-testbed06
2026-03-05 00:00:28.111822 | orchestrator | Distro: Debian 12.13
2026-03-05 00:00:28.111912 | orchestrator | Provider: static-testbed
2026-03-05 00:00:28.111939 | orchestrator | Region:
2026-03-05 00:00:28.111958 | orchestrator | Label: testbed-orchestrator
2026-03-05 00:00:28.111975 | orchestrator | Product Name: OpenStack Nova
2026-03-05 00:00:28.111991 | orchestrator | Interface IP: 81.163.193.140
2026-03-05 00:00:28.140263 |
2026-03-05 00:00:28.140370 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-03-05 00:00:29.529009 | orchestrator -> localhost | changed
2026-03-05 00:00:29.535802 |
2026-03-05 00:00:29.535894 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-03-05 00:00:31.787450 | orchestrator -> localhost | changed
2026-03-05 00:00:31.803781 |
2026-03-05 00:00:31.803885 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-03-05 00:00:32.615588 | orchestrator -> localhost | ok
2026-03-05 00:00:32.621324 |
2026-03-05 00:00:32.621409 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-03-05 00:00:32.658567 | orchestrator | ok
2026-03-05 00:00:32.700967 | orchestrator | included: /var/lib/zuul/builds/1839e368fcb149ffb676fb798204165f/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-03-05 00:00:32.722390 |
2026-03-05 00:00:32.722480 | TASK [add-build-sshkey : Create Temp SSH key]
2026-03-05 00:00:36.226635 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-03-05 00:00:36.226814 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/1839e368fcb149ffb676fb798204165f/work/1839e368fcb149ffb676fb798204165f_id_rsa
2026-03-05 00:00:36.226997 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/1839e368fcb149ffb676fb798204165f/work/1839e368fcb149ffb676fb798204165f_id_rsa.pub
2026-03-05 00:00:36.227022 | orchestrator -> localhost | The key fingerprint is:
2026-03-05 00:00:36.227043 | orchestrator -> localhost | SHA256:4nnC80nXjs2hZmrlA93ndJ1fjJxoQwPPp3gMTzyMGyE zuul-build-sshkey
2026-03-05 00:00:36.227062 | orchestrator -> localhost | The key's randomart image is:
2026-03-05 00:00:36.227091 | orchestrator -> localhost | +---[RSA 3072]----+
2026-03-05 00:00:36.227109 | orchestrator -> localhost | |                 |
2026-03-05 00:00:36.227126 | orchestrator -> localhost | | E o             |
2026-03-05 00:00:36.227143 | orchestrator -> localhost | | . O             |
2026-03-05 00:00:36.227160 | orchestrator -> localhost | |  + X .          |
2026-03-05 00:00:36.227176 | orchestrator -> localhost | | . S. @ B +o     |
2026-03-05 00:00:36.227197 | orchestrator -> localhost | | o o. =.X *.=    |
2026-03-05 00:00:36.227214 | orchestrator -> localhost | |  * o+.oo= .o    |
2026-03-05 00:00:36.227230 | orchestrator -> localhost | |  *.o=* .. .     |
2026-03-05 00:00:36.227248 | orchestrator -> localhost | |  .++o.+         |
2026-03-05 00:00:36.227265 | orchestrator -> localhost | +----[SHA256]-----+
2026-03-05 00:00:36.227313 | orchestrator -> localhost | ok: Runtime: 0:00:01.999182
2026-03-05 00:00:36.239080 |
2026-03-05 00:00:36.239172 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-03-05 00:00:36.296510 | orchestrator | ok
2026-03-05 00:00:36.306825 | orchestrator | included: /var/lib/zuul/builds/1839e368fcb149ffb676fb798204165f/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-03-05 00:00:36.325656 |
2026-03-05 00:00:36.328139 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-03-05 00:00:36.381161 | orchestrator | skipping: Conditional result was False
2026-03-05 00:00:36.387698 |
2026-03-05 00:00:36.387788 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-03-05 00:00:37.245362 | orchestrator | changed
2026-03-05 00:00:37.261892 |
2026-03-05 00:00:37.262000 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-03-05 00:00:37.556783 | orchestrator | ok
2026-03-05 00:00:37.566155 |
2026-03-05 00:00:37.566261 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-03-05 00:00:38.061330 | orchestrator | ok
2026-03-05 00:00:38.066054 |
2026-03-05 00:00:38.066131 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-03-05 00:00:38.607669 | orchestrator | ok
2026-03-05 00:00:38.612655 |
2026-03-05 00:00:38.612735 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-03-05 00:00:38.647537 | orchestrator | skipping: Conditional result was False
2026-03-05 00:00:38.654585 |
2026-03-05 00:00:38.654703 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-03-05 00:00:39.654322 | orchestrator -> localhost | changed
2026-03-05 00:00:39.668667 |
2026-03-05 00:00:39.668765 | TASK [add-build-sshkey : Add back temp key]
2026-03-05 00:00:40.758532 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/1839e368fcb149ffb676fb798204165f/work/1839e368fcb149ffb676fb798204165f_id_rsa (zuul-build-sshkey)
2026-03-05 00:00:40.758725 | orchestrator -> localhost | ok: Runtime: 0:00:00.049619
2026-03-05 00:00:40.764910 |
2026-03-05 00:00:40.764994 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-03-05 00:00:41.198934 | orchestrator | ok
2026-03-05 00:00:41.204696 |
2026-03-05 00:00:41.204782 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-03-05 00:00:41.271060 | orchestrator | skipping: Conditional result was False
2026-03-05 00:00:41.378087 |
2026-03-05 00:00:41.378183 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-03-05 00:00:42.254013 | orchestrator | ok
2026-03-05 00:00:42.291792 |
2026-03-05 00:00:42.291895 | TASK [validate-host : Define zuul_info_dir fact]
2026-03-05 00:00:42.353235 | orchestrator | ok
2026-03-05 00:00:42.370054 |
2026-03-05 00:00:42.370152 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-03-05 00:00:43.160849 | orchestrator -> localhost | ok
2026-03-05 00:00:43.166712 |
2026-03-05 00:00:43.166795 | TASK [validate-host : Collect information about the host]
2026-03-05 00:00:44.830174 | orchestrator | ok
2026-03-05 00:00:44.869975 |
2026-03-05 00:00:44.870083 | TASK [validate-host : Sanitize hostname]
2026-03-05 00:00:45.053788 | orchestrator | ok
2026-03-05 00:00:45.058358 |
2026-03-05 00:00:45.058447 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-03-05 00:00:46.201574 | orchestrator -> localhost | changed
2026-03-05 00:00:46.208459 |
2026-03-05 00:00:46.208558 | TASK [validate-host : Collect information about zuul worker]
2026-03-05 00:00:46.804822 | orchestrator | ok
2026-03-05 00:00:46.810547 |
2026-03-05 00:00:46.810636 | TASK [validate-host : Write out all zuul information for each host]
2026-03-05 00:00:48.178164 | orchestrator -> localhost | changed
2026-03-05 00:00:48.186665 |
2026-03-05 00:00:48.186749 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-03-05 00:00:48.504389 | orchestrator | ok
2026-03-05 00:00:48.509530 |
2026-03-05 00:00:48.509614 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-03-05 00:02:13.890127 | orchestrator | changed:
2026-03-05 00:02:13.891223 | orchestrator | .d..t...... src/
2026-03-05 00:02:13.891296 | orchestrator | .d..t...... src/github.com/
2026-03-05 00:02:13.891327 | orchestrator | .d..t...... src/github.com/osism/
2026-03-05 00:02:13.891352 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-03-05 00:02:13.891374 | orchestrator | RedHat.yml
2026-03-05 00:02:13.907966 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-03-05 00:02:13.907994 | orchestrator | RedHat.yml
2026-03-05 00:02:13.908077 | orchestrator | = 2.2.0"...
2026-03-05 00:02:24.420504 | orchestrator | - Finding latest version of hashicorp/null...
2026-03-05 00:02:24.438007 | orchestrator | - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2026-03-05 00:02:24.580836 | orchestrator | - Installing hashicorp/local v2.7.0...
2026-03-05 00:02:25.040057 | orchestrator | - Installed hashicorp/local v2.7.0 (signed, key ID 0C0AF313E5FD9F80)
2026-03-05 00:02:25.101441 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-03-05 00:02:25.598385 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-03-05 00:02:25.657032 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-03-05 00:02:26.429681 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-03-05 00:02:26.429755 | orchestrator |
2026-03-05 00:02:26.429765 | orchestrator | Providers are signed by their developers.
2026-03-05 00:02:26.429773 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-03-05 00:02:26.429780 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-03-05 00:02:26.429789 | orchestrator |
2026-03-05 00:02:26.429795 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-03-05 00:02:26.429811 | orchestrator | selections it made above. Include this file in your version control repository
2026-03-05 00:02:26.429817 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-03-05 00:02:26.429823 | orchestrator | you run "tofu init" in the future.
2026-03-05 00:02:26.430787 | orchestrator |
2026-03-05 00:02:26.431225 | orchestrator | OpenTofu has been successfully initialized!
2026-03-05 00:02:26.431239 | orchestrator |
2026-03-05 00:02:26.431271 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-03-05 00:02:26.431277 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-03-05 00:02:26.431298 | orchestrator | should now work.
2026-03-05 00:02:26.431303 | orchestrator |
2026-03-05 00:02:26.431307 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-03-05 00:02:26.431311 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-03-05 00:02:26.431380 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-03-05 00:02:26.590916 | orchestrator | Created and switched to workspace "ci"!
2026-03-05 00:02:26.591009 | orchestrator |
2026-03-05 00:02:26.591016 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-03-05 00:02:26.591022 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-03-05 00:02:26.591045 | orchestrator | for this configuration.
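The provider resolution above implies a `required_providers` block along the following lines. This is a hedged reconstruction from the constraints visible in this log only, not the testbed repository's actual configuration; in particular, the truncated `>= 2.2.0` constraint earlier in the log is assumed to belong to hashicorp/local, since that provider is resolved immediately after it.

```hcl
terraform {
  required_providers {
    local = {
      source  = "hashicorp/local"
      version = ">= 2.2.0" # assumed owner of the truncated constraint; resolved to v2.7.0
    }
    null = {
      source = "hashicorp/null" # no constraint ("Finding latest version"); resolved to v3.2.4
    }
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = ">= 1.53.0" # per the log; resolved to v3.4.0
    }
  }
}
```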
2026-03-05 00:02:26.691192 | orchestrator | ci.auto.tfvars
2026-03-05 00:02:26.778541 | orchestrator | default_custom.tf
2026-03-05 00:02:28.240265 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-03-05 00:02:28.898665 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-03-05 00:02:29.218043 | orchestrator |
2026-03-05 00:02:29.218102 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-03-05 00:02:29.218109 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-03-05 00:02:29.218153 | orchestrator |   + create
2026-03-05 00:02:29.218179 | orchestrator |  <= read (data resources)
2026-03-05 00:02:29.218194 | orchestrator |
2026-03-05 00:02:29.218199 | orchestrator | OpenTofu will perform the following actions:
2026-03-05 00:02:29.218320 | orchestrator |
2026-03-05 00:02:29.218335 | orchestrator |   # data.openstack_images_image_v2.image will be read during apply
2026-03-05 00:02:29.218340 | orchestrator |   # (config refers to values not yet known)
2026-03-05 00:02:29.218344 | orchestrator |  <= data "openstack_images_image_v2" "image" {
2026-03-05 00:02:29.218348 | orchestrator |       + checksum = (known after apply)
2026-03-05 00:02:29.218353 | orchestrator |       + created_at = (known after apply)
2026-03-05 00:02:29.218357 | orchestrator |       + file = (known after apply)
2026-03-05 00:02:29.218361 | orchestrator |       + id = (known after apply)
2026-03-05 00:02:29.218378 | orchestrator |       + metadata = (known after apply)
2026-03-05 00:02:29.218382 | orchestrator |       + min_disk_gb = (known after apply)
2026-03-05 00:02:29.218386 | orchestrator |       + min_ram_mb = (known after apply)
2026-03-05 00:02:29.218391 | orchestrator |       + most_recent = true
2026-03-05 00:02:29.218394 | orchestrator |       + name = (known after apply)
2026-03-05 00:02:29.219046 | orchestrator |       + protected = (known after apply)
2026-03-05 00:02:29.219087 | orchestrator |       + region = (known after apply)
2026-03-05 00:02:29.219134 | orchestrator |       + schema = (known after apply)
2026-03-05 00:02:29.219162 | orchestrator |       + size_bytes = (known after apply)
2026-03-05 00:02:29.219210 | orchestrator |       + tags = (known after apply)
2026-03-05 00:02:29.219231 | orchestrator |       + updated_at = (known after apply)
2026-03-05 00:02:29.219255 | orchestrator |     }
2026-03-05 00:02:29.225505 | orchestrator |
2026-03-05 00:02:29.225540 | orchestrator |   # data.openstack_images_image_v2.image_node will be read during apply
2026-03-05 00:02:29.225546 | orchestrator |   # (config refers to values not yet known)
2026-03-05 00:02:29.225551 | orchestrator |  <= data "openstack_images_image_v2" "image_node" {
2026-03-05 00:02:29.225555 | orchestrator |       + checksum = (known after apply)
2026-03-05 00:02:29.225559 | orchestrator |       + created_at = (known after apply)
2026-03-05 00:02:29.225564 | orchestrator |       + file = (known after apply)
2026-03-05 00:02:29.225567 | orchestrator |       + id = (known after apply)
2026-03-05 00:02:29.225572 | orchestrator |       + metadata = (known after apply)
2026-03-05 00:02:29.225576 | orchestrator |       + min_disk_gb = (known after apply)
2026-03-05 00:02:29.225580 | orchestrator |       + min_ram_mb = (known after apply)
2026-03-05 00:02:29.225584 | orchestrator |       + most_recent = true
2026-03-05 00:02:29.225588 | orchestrator |       + name = (known after apply)
2026-03-05 00:02:29.225592 | orchestrator |       + protected = (known after apply)
2026-03-05 00:02:29.225596 | orchestrator |       + region = (known after apply)
2026-03-05 00:02:29.225599 | orchestrator |       + schema = (known after apply)
2026-03-05 00:02:29.225603 | orchestrator |       + size_bytes = (known after apply)
2026-03-05 00:02:29.225607 | orchestrator |       + tags = (known after apply)
2026-03-05 00:02:29.225611 | orchestrator |       + updated_at = (known after apply)
2026-03-05 00:02:29.225615 | orchestrator |     }
2026-03-05 00:02:29.225629 | orchestrator |
2026-03-05 00:02:29.225634 | orchestrator |   # local_file.MANAGER_ADDRESS will be created
2026-03-05 00:02:29.225638 | orchestrator |   + resource "local_file" "MANAGER_ADDRESS" {
2026-03-05 00:02:29.225642 | orchestrator |       + content = (known after apply)
2026-03-05 00:02:29.225646 | orchestrator |       + content_base64sha256 = (known after apply)
2026-03-05 00:02:29.225650 | orchestrator |       + content_base64sha512 = (known after apply)
2026-03-05 00:02:29.225654 | orchestrator |       + content_md5 = (known after apply)
2026-03-05 00:02:29.225658 | orchestrator |       + content_sha1 = (known after apply)
2026-03-05 00:02:29.225661 | orchestrator |       + content_sha256 = (known after apply)
2026-03-05 00:02:29.225665 | orchestrator |       + content_sha512 = (known after apply)
2026-03-05 00:02:29.225669 | orchestrator |       + directory_permission = "0777"
2026-03-05 00:02:29.225673 | orchestrator |       + file_permission = "0644"
2026-03-05 00:02:29.225677 | orchestrator |       + filename = ".MANAGER_ADDRESS.ci"
2026-03-05 00:02:29.225681 | orchestrator |       + id = (known after apply)
2026-03-05 00:02:29.225685 | orchestrator |     }
2026-03-05 00:02:29.225689 | orchestrator |
2026-03-05 00:02:29.225693 | orchestrator |   # local_file.id_rsa_pub will be created
2026-03-05 00:02:29.225697 | orchestrator |   + resource "local_file" "id_rsa_pub" {
2026-03-05 00:02:29.225701 | orchestrator |       + content = (known after apply)
2026-03-05 00:02:29.225704 | orchestrator |       + content_base64sha256 = (known after apply)
2026-03-05 00:02:29.225708 | orchestrator |       + content_base64sha512 = (known after apply)
2026-03-05 00:02:29.225712 | orchestrator |       + content_md5 = (known after apply)
2026-03-05 00:02:29.225716 | orchestrator |       + content_sha1 = (known after apply)
2026-03-05 00:02:29.225720 | orchestrator |       + content_sha256 = (known after apply)
2026-03-05 00:02:29.225731 | orchestrator |       + content_sha512 = (known after apply)
2026-03-05 00:02:29.225735 | orchestrator |       + directory_permission = "0777"
2026-03-05 00:02:29.225739 | orchestrator |       + file_permission = "0644"
2026-03-05 00:02:29.225753 | orchestrator |       + filename = ".id_rsa.ci.pub"
2026-03-05 00:02:29.225757 | orchestrator |       + id = (known after apply)
2026-03-05 00:02:29.225761 | orchestrator |     }
2026-03-05 00:02:29.225765 | orchestrator |
2026-03-05 00:02:29.225769 | orchestrator |   # local_file.inventory will be created
2026-03-05 00:02:29.225773 | orchestrator |   + resource "local_file" "inventory" {
2026-03-05 00:02:29.225777 | orchestrator |       + content = (known after apply)
2026-03-05 00:02:29.225781 | orchestrator |       + content_base64sha256 = (known after apply)
2026-03-05 00:02:29.225784 | orchestrator |       + content_base64sha512 = (known after apply)
2026-03-05 00:02:29.225788 | orchestrator |       + content_md5 = (known after apply)
2026-03-05 00:02:29.225792 | orchestrator |       + content_sha1 = (known after apply)
2026-03-05 00:02:29.225797 | orchestrator |       + content_sha256 = (known after apply)
2026-03-05 00:02:29.225801 | orchestrator |       + content_sha512 = (known after apply)
2026-03-05 00:02:29.225805 | orchestrator |       + directory_permission = "0777"
2026-03-05 00:02:29.225809 | orchestrator |       + file_permission = "0644"
2026-03-05 00:02:29.225813 | orchestrator |       + filename = "inventory.ci"
2026-03-05 00:02:29.225817 | orchestrator |       + id = (known after apply)
2026-03-05 00:02:29.225820 | orchestrator |     }
2026-03-05 00:02:29.225826 | orchestrator |
2026-03-05 00:02:29.225830 | orchestrator |   # local_sensitive_file.id_rsa will be created
2026-03-05 00:02:29.225834 | orchestrator |   + resource "local_sensitive_file" "id_rsa" {
2026-03-05 00:02:29.225838 | orchestrator |       + content = (sensitive value)
2026-03-05 00:02:29.225842 | orchestrator |       + content_base64sha256 = (known after apply)
2026-03-05 00:02:29.225846 | orchestrator |       + content_base64sha512 = (known after apply)
2026-03-05 00:02:29.225849 | orchestrator |       + content_md5 = (known after apply)
2026-03-05 00:02:29.225853 | orchestrator |       + content_sha1 = (known after apply)
2026-03-05 00:02:29.225857 | orchestrator |       + content_sha256 = (known after apply)
2026-03-05 00:02:29.225861 | orchestrator |       + content_sha512 = (known after apply)
2026-03-05 00:02:29.225865 | orchestrator |       + directory_permission = "0700"
2026-03-05 00:02:29.225869 | orchestrator |       + file_permission = "0600"
2026-03-05 00:02:29.225873 | orchestrator |       + filename = ".id_rsa.ci"
2026-03-05 00:02:29.225877 | orchestrator |       + id = (known after apply)
2026-03-05 00:02:29.225880 | orchestrator |     }
2026-03-05 00:02:29.225884 | orchestrator |
2026-03-05 00:02:29.225888 | orchestrator |   # null_resource.node_semaphore will be created
2026-03-05 00:02:29.225892 | orchestrator |   + resource "null_resource" "node_semaphore" {
2026-03-05 00:02:29.225896 | orchestrator |       + id = (known after apply)
2026-03-05 00:02:29.225900 | orchestrator |     }
2026-03-05 00:02:29.225903 | orchestrator |
2026-03-05 00:02:29.225910 | orchestrator |   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-03-05 00:02:29.225916 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-03-05 00:02:29.225922 | orchestrator |       + attachment = (known after apply)
2026-03-05 00:02:29.225928 | orchestrator |       + availability_zone = "nova"
2026-03-05 00:02:29.225934 | orchestrator |       + id = (known after apply)
2026-03-05 00:02:29.225939 | orchestrator |       + image_id = (known after apply)
2026-03-05 00:02:29.225979 | orchestrator |       + metadata = (known after apply)
2026-03-05 00:02:29.225985 | orchestrator |       + name = "testbed-volume-manager-base"
2026-03-05 00:02:29.225991 | orchestrator |       + region = (known after apply)
2026-03-05 00:02:29.225997 | orchestrator |       + size = 80
2026-03-05 00:02:29.226003 | orchestrator |       + volume_retype_policy = "never"
2026-03-05 00:02:29.226009 | orchestrator |       + volume_type = "ssd"
2026-03-05 00:02:29.226041 | orchestrator |     }
2026-03-05 00:02:29.226047 | orchestrator |
2026-03-05 00:02:29.226053 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-03-05 00:02:29.226060 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-05 00:02:29.226066 | orchestrator |       + attachment = (known after apply)
2026-03-05 00:02:29.226072 | orchestrator |       + availability_zone = "nova"
2026-03-05 00:02:29.226079 | orchestrator |       + id = (known after apply)
2026-03-05 00:02:29.226094 | orchestrator |       + image_id = (known after apply)
2026-03-05 00:02:29.226101 | orchestrator |       + metadata = (known after apply)
2026-03-05 00:02:29.226108 | orchestrator |       + name = "testbed-volume-0-node-base"
2026-03-05 00:02:29.226115 | orchestrator |       + region = (known after apply)
2026-03-05 00:02:29.226122 | orchestrator |       + size = 80
2026-03-05 00:02:29.226128 | orchestrator |       + volume_retype_policy = "never"
2026-03-05 00:02:29.226135 | orchestrator |       + volume_type = "ssd"
2026-03-05 00:02:29.226142 | orchestrator |     }
2026-03-05 00:02:29.226148 | orchestrator |
2026-03-05 00:02:29.226154 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-03-05 00:02:29.226160 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-05 00:02:29.226167 | orchestrator |       + attachment = (known after apply)
2026-03-05 00:02:29.226173 | orchestrator |       + availability_zone = "nova"
2026-03-05 00:02:29.226177 | orchestrator |       + id = (known after apply)
2026-03-05 00:02:29.226181 | orchestrator |       + image_id = (known after apply)
2026-03-05 00:02:29.226185 | orchestrator |       + metadata = (known after apply)
2026-03-05 00:02:29.226188 | orchestrator |       + name = "testbed-volume-1-node-base"
2026-03-05 00:02:29.226192 | orchestrator |       + region = (known after apply)
2026-03-05 00:02:29.226196 | orchestrator |       + size = 80
2026-03-05 00:02:29.226200 | orchestrator |       + volume_retype_policy = "never"
2026-03-05 00:02:29.226204 | orchestrator |       + volume_type = "ssd"
2026-03-05 00:02:29.226208 | orchestrator |     }
2026-03-05 00:02:29.226216 | orchestrator |
2026-03-05 00:02:29.226219 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-03-05 00:02:29.226223 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-05 00:02:29.226227 | orchestrator |       + attachment = (known after apply)
2026-03-05 00:02:29.226231 | orchestrator |       + availability_zone = "nova"
2026-03-05 00:02:29.226235 | orchestrator |       + id = (known after apply)
2026-03-05 00:02:29.226238 | orchestrator |       + image_id = (known after apply)
2026-03-05 00:02:29.226242 | orchestrator |       + metadata = (known after apply)
2026-03-05 00:02:29.226246 | orchestrator |       + name = "testbed-volume-2-node-base"
2026-03-05 00:02:29.226250 | orchestrator |       + region = (known after apply)
2026-03-05 00:02:29.226253 | orchestrator |       + size = 80
2026-03-05 00:02:29.226262 | orchestrator |       + volume_retype_policy = "never"
2026-03-05 00:02:29.226266 | orchestrator |       + volume_type = "ssd"
2026-03-05 00:02:29.226269 | orchestrator |     }
2026-03-05 00:02:29.226273 | orchestrator |
2026-03-05 00:02:29.226277 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-03-05 00:02:29.226281 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-05 00:02:29.226285 | orchestrator |       + attachment = (known after apply)
2026-03-05 00:02:29.226288 | orchestrator |       + availability_zone = "nova"
2026-03-05 00:02:29.226292 | orchestrator |       + id = (known after apply)
2026-03-05 00:02:29.226296 | orchestrator |       + image_id = (known after apply)
2026-03-05 00:02:29.226300 | orchestrator |       + metadata = (known after apply)
2026-03-05 00:02:29.226304 | orchestrator |       + name = "testbed-volume-3-node-base"
2026-03-05 00:02:29.226308 | orchestrator |       + region = (known after apply)
2026-03-05 00:02:29.226311 | orchestrator |       + size = 80
2026-03-05 00:02:29.226315 | orchestrator |       + volume_retype_policy = "never"
2026-03-05 00:02:29.226319 | orchestrator |       + volume_type = "ssd"
2026-03-05 00:02:29.226323 | orchestrator |     }
2026-03-05 00:02:29.226327 | orchestrator |
2026-03-05 00:02:29.226330 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-03-05 00:02:29.226334 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-05 00:02:29.226338 | orchestrator |       + attachment = (known after apply)
2026-03-05 00:02:29.226342 | orchestrator |       + availability_zone = "nova"
2026-03-05 00:02:29.226346 | orchestrator |       + id = (known after apply)
2026-03-05 00:02:29.226354 | orchestrator |       + image_id = (known after apply)
2026-03-05 00:02:29.226358 | orchestrator |       + metadata = (known after apply)
2026-03-05 00:02:29.226361 | orchestrator |       + name = "testbed-volume-4-node-base"
2026-03-05 00:02:29.226365 | orchestrator |       + region = (known after apply)
2026-03-05 00:02:29.226369 | orchestrator |       + size = 80
2026-03-05 00:02:29.226373 | orchestrator |       + volume_retype_policy = "never"
2026-03-05 00:02:29.226377 | orchestrator |       + volume_type = "ssd"
2026-03-05 00:02:29.226380 | orchestrator |     }
2026-03-05 00:02:29.226384 | orchestrator |
2026-03-05 00:02:29.226388 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-03-05 00:02:29.226392 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-05 00:02:29.226396 | orchestrator |       + attachment = (known after apply)
2026-03-05 00:02:29.226399 | orchestrator |       + availability_zone = "nova"
2026-03-05 00:02:29.226403 | orchestrator |       + id = (known after apply)
2026-03-05 00:02:29.226407 | orchestrator |       + image_id = (known after apply)
2026-03-05 00:02:29.226411 | orchestrator |       + metadata = (known after apply)
2026-03-05 00:02:29.226416 | orchestrator |       + name = "testbed-volume-5-node-base"
2026-03-05 00:02:29.226422 | orchestrator |       + region = (known after apply)
2026-03-05 00:02:29.226428 | orchestrator |       + size = 80
2026-03-05 00:02:29.226435 | orchestrator |       + volume_retype_policy = "never"
2026-03-05 00:02:29.226441 | orchestrator |       + volume_type = "ssd"
2026-03-05 00:02:29.226447 | orchestrator |     }
2026-03-05 00:02:29.226452 | orchestrator |
2026-03-05 00:02:29.226458 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-03-05 00:02:29.226465 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-05 00:02:29.226471 | orchestrator |       + attachment = (known after apply)
2026-03-05 00:02:29.226477 | orchestrator |       + availability_zone = "nova"
2026-03-05 00:02:29.226483 | orchestrator |       + id = (known after apply)
2026-03-05 00:02:29.226488 | orchestrator |       + metadata = (known after apply)
2026-03-05 00:02:29.226494 | orchestrator |       + name = "testbed-volume-0-node-3"
2026-03-05 00:02:29.226501 | orchestrator |       + region = (known after apply)
2026-03-05 00:02:29.226507 | orchestrator |       + size = 20
2026-03-05 00:02:29.226511 | orchestrator |       + volume_retype_policy = "never"
2026-03-05 00:02:29.226515 | orchestrator |       + volume_type = "ssd"
2026-03-05 00:02:29.226518 | orchestrator |     }
2026-03-05 00:02:29.226522 | orchestrator |
2026-03-05 00:02:29.226526 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-03-05 00:02:29.226530 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-05 00:02:29.226534 | orchestrator |       + attachment = (known after apply)
2026-03-05 00:02:29.226537 | orchestrator |       + availability_zone = "nova"
2026-03-05 00:02:29.226541 | orchestrator |       + id = (known after apply)
2026-03-05 00:02:29.226545 | orchestrator |       + metadata = (known after apply)
2026-03-05 00:02:29.226549 | orchestrator |       + name = "testbed-volume-1-node-4"
2026-03-05 00:02:29.226553 | orchestrator |       + region = (known after apply)
2026-03-05 00:02:29.226556 | orchestrator |       + size = 20
2026-03-05 00:02:29.226560 | orchestrator |       + volume_retype_policy = "never"
2026-03-05 00:02:29.226564 | orchestrator |       + volume_type = "ssd"
2026-03-05 00:02:29.226568 | orchestrator |     }
2026-03-05 00:02:29.226572 | orchestrator |
2026-03-05 00:02:29.226575 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-03-05 00:02:29.226579 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-05 00:02:29.226583 | orchestrator |       + attachment = (known after apply)
2026-03-05 00:02:29.226587 | orchestrator |       + availability_zone = "nova"
2026-03-05 00:02:29.226591 | orchestrator |       + id = (known after apply)
2026-03-05 00:02:29.226595 | orchestrator |       + metadata = (known after apply)
2026-03-05 00:02:29.226598 | orchestrator |       + name = "testbed-volume-2-node-5"
2026-03-05 00:02:29.226602 | orchestrator |       + region = (known after apply)
2026-03-05 00:02:29.226610 | orchestrator |       + size = 20
2026-03-05 00:02:29.226614 | orchestrator |       + volume_retype_policy = "never"
2026-03-05 00:02:29.226618 | orchestrator |       + volume_type = "ssd"
2026-03-05 00:02:29.226622 | orchestrator |     }
2026-03-05 00:02:29.226629 | orchestrator |
2026-03-05 00:02:29.226633 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-03-05 00:02:29.226637 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-05 00:02:29.226640 | orchestrator |       + attachment = (known after apply)
2026-03-05 00:02:29.226644 | orchestrator |       + availability_zone = "nova"
2026-03-05 00:02:29.226648 | orchestrator |       + id = (known after apply)
2026-03-05 00:02:29.226655 | orchestrator |       + metadata = (known after apply)
2026-03-05 00:02:29.226659 | orchestrator |       + name = "testbed-volume-3-node-3"
2026-03-05 00:02:29.226663 | orchestrator |       + region = (known after apply)
2026-03-05 00:02:29.226667 | orchestrator |       + size = 20
2026-03-05 00:02:29.226670 | orchestrator |       + volume_retype_policy = "never"
2026-03-05 00:02:29.226674 | orchestrator |       + volume_type = "ssd"
2026-03-05 00:02:29.226678 | orchestrator |     }
2026-03-05 00:02:29.226682 | orchestrator |
2026-03-05 00:02:29.226686 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-03-05 00:02:29.226690 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-05 00:02:29.226693 | orchestrator |       + attachment = (known after apply)
2026-03-05 00:02:29.226697 | orchestrator |       + availability_zone = "nova"
2026-03-05 00:02:29.226701 | orchestrator |       + id = (known after apply)
2026-03-05 00:02:29.226705 | orchestrator |       + metadata = (known after apply)
2026-03-05 00:02:29.226709 | orchestrator |       + name = "testbed-volume-4-node-4"
2026-03-05 00:02:29.226713 | orchestrator |       + region = (known after apply)
2026-03-05 00:02:29.226717 | orchestrator |       + size = 20
2026-03-05 00:02:29.226720 | orchestrator |       + volume_retype_policy = "never"
2026-03-05 00:02:29.226724 | orchestrator |       + volume_type = "ssd"
2026-03-05 00:02:29.226728 | orchestrator |     }
2026-03-05 00:02:29.226732 | orchestrator |
2026-03-05 00:02:29.226736 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-03-05 00:02:29.226740 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-05 00:02:29.226743 | orchestrator |       + attachment = (known after apply)
2026-03-05 00:02:29.226747 | orchestrator |       + availability_zone = "nova"
2026-03-05 00:02:29.226751 | orchestrator |       + id = (known after apply)
2026-03-05 00:02:29.226755 | orchestrator |       + metadata = (known after apply)
2026-03-05 00:02:29.226759 | orchestrator |       + name = "testbed-volume-5-node-5"
2026-03-05 00:02:29.226765 | orchestrator |       + region = (known after apply)
2026-03-05 00:02:29.226771 | orchestrator |       + size = 20
2026-03-05 00:02:29.226776 | orchestrator |       + volume_retype_policy = "never"
2026-03-05 00:02:29.226782 | orchestrator |       + volume_type = "ssd"
2026-03-05 00:02:29.226788 | orchestrator |     }
2026-03-05 00:02:29.226794 | orchestrator |
2026-03-05 00:02:29.226800 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-03-05 00:02:29.226806 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-05 00:02:29.226812 | orchestrator |       + attachment = (known after apply)
2026-03-05 00:02:29.226818 | orchestrator |       + availability_zone = "nova"
2026-03-05 00:02:29.226826 | orchestrator |       + id = (known after apply)
2026-03-05 00:02:29.226832 | orchestrator |       + metadata = (known after apply)
2026-03-05 00:02:29.226845 | orchestrator |       + name = "testbed-volume-6-node-3"
2026-03-05 00:02:29.226850 | orchestrator |       + region = (known after apply)
2026-03-05 00:02:29.226854 | orchestrator |       + size = 20
2026-03-05 00:02:29.226859 | orchestrator |       + volume_retype_policy = "never"
2026-03-05 00:02:29.226863 | orchestrator |       + volume_type = "ssd"
2026-03-05 00:02:29.226868 | orchestrator |     }
2026-03-05 00:02:29.226872 | orchestrator |
2026-03-05 00:02:29.226877 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-03-05 00:02:29.226881 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-05 00:02:29.226889 | orchestrator |       + attachment = (known after apply)
2026-03-05 00:02:29.226894 | orchestrator |       + availability_zone = "nova"
2026-03-05 00:02:29.226899 | orchestrator |       + id = (known after apply)
2026-03-05 00:02:29.226903 | orchestrator |       + metadata = (known after apply)
2026-03-05 00:02:29.226907 | orchestrator |       + name = "testbed-volume-7-node-4"
2026-03-05 00:02:29.226911 | orchestrator |       + region = (known after apply)
2026-03-05 00:02:29.226916 | orchestrator |       + size = 20
2026-03-05 00:02:29.226920 | orchestrator |       + volume_retype_policy = "never"
2026-03-05 00:02:29.226925 | orchestrator |       + volume_type = "ssd"
2026-03-05 00:02:29.226929 | orchestrator |     }
2026-03-05 00:02:29.226933 | orchestrator |
2026-03-05 00:02:29.226938 | orchestrator |   #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-03-05 00:02:29.226960 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-03-05 00:02:29.226966 | orchestrator | + attachment = (known after apply) 2026-03-05 00:02:29.226970 | orchestrator | + availability_zone = "nova" 2026-03-05 00:02:29.226974 | orchestrator | + id = (known after apply) 2026-03-05 00:02:29.226979 | orchestrator | + metadata = (known after apply) 2026-03-05 00:02:29.226984 | orchestrator | + name = "testbed-volume-8-node-5" 2026-03-05 00:02:29.226989 | orchestrator | + region = (known after apply) 2026-03-05 00:02:29.226993 | orchestrator | + size = 20 2026-03-05 00:02:29.226998 | orchestrator | + volume_retype_policy = "never" 2026-03-05 00:02:29.227002 | orchestrator | + volume_type = "ssd" 2026-03-05 00:02:29.227006 | orchestrator | } 2026-03-05 00:02:29.227011 | orchestrator | 2026-03-05 00:02:29.227015 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-03-05 00:02:29.227020 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-03-05 00:02:29.227025 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-05 00:02:29.227029 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-05 00:02:29.227034 | orchestrator | + all_metadata = (known after apply) 2026-03-05 00:02:29.227038 | orchestrator | + all_tags = (known after apply) 2026-03-05 00:02:29.227042 | orchestrator | + availability_zone = "nova" 2026-03-05 00:02:29.227047 | orchestrator | + config_drive = true 2026-03-05 00:02:29.227054 | orchestrator | + created = (known after apply) 2026-03-05 00:02:29.227059 | orchestrator | + flavor_id = (known after apply) 2026-03-05 00:02:29.227064 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-03-05 00:02:29.227069 | orchestrator | + force_delete = false 2026-03-05 00:02:29.227073 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-05 00:02:29.227078 | 
orchestrator | + id = (known after apply) 2026-03-05 00:02:29.227082 | orchestrator | + image_id = (known after apply) 2026-03-05 00:02:29.227087 | orchestrator | + image_name = (known after apply) 2026-03-05 00:02:29.227097 | orchestrator | + key_pair = "testbed" 2026-03-05 00:02:29.227102 | orchestrator | + name = "testbed-manager" 2026-03-05 00:02:29.227106 | orchestrator | + power_state = "active" 2026-03-05 00:02:29.227111 | orchestrator | + region = (known after apply) 2026-03-05 00:02:29.227116 | orchestrator | + security_groups = (known after apply) 2026-03-05 00:02:29.227120 | orchestrator | + stop_before_destroy = false 2026-03-05 00:02:29.227124 | orchestrator | + updated = (known after apply) 2026-03-05 00:02:29.227129 | orchestrator | + user_data = (sensitive value) 2026-03-05 00:02:29.227134 | orchestrator | 2026-03-05 00:02:29.227139 | orchestrator | + block_device { 2026-03-05 00:02:29.227143 | orchestrator | + boot_index = 0 2026-03-05 00:02:29.227148 | orchestrator | + delete_on_termination = false 2026-03-05 00:02:29.227152 | orchestrator | + destination_type = "volume" 2026-03-05 00:02:29.227156 | orchestrator | + multiattach = false 2026-03-05 00:02:29.227159 | orchestrator | + source_type = "volume" 2026-03-05 00:02:29.227163 | orchestrator | + uuid = (known after apply) 2026-03-05 00:02:29.227170 | orchestrator | } 2026-03-05 00:02:29.227174 | orchestrator | 2026-03-05 00:02:29.227178 | orchestrator | + network { 2026-03-05 00:02:29.227182 | orchestrator | + access_network = false 2026-03-05 00:02:29.227186 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-05 00:02:29.227190 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-05 00:02:29.227193 | orchestrator | + mac = (known after apply) 2026-03-05 00:02:29.227197 | orchestrator | + name = (known after apply) 2026-03-05 00:02:29.227201 | orchestrator | + port = (known after apply) 2026-03-05 00:02:29.227205 | orchestrator | + uuid = (known after apply) 2026-03-05 
00:02:29.227209 | orchestrator | } 2026-03-05 00:02:29.227212 | orchestrator | } 2026-03-05 00:02:29.227216 | orchestrator | 2026-03-05 00:02:29.227220 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-03-05 00:02:29.227224 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-05 00:02:29.227228 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-05 00:02:29.227232 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-05 00:02:29.227235 | orchestrator | + all_metadata = (known after apply) 2026-03-05 00:02:29.227239 | orchestrator | + all_tags = (known after apply) 2026-03-05 00:02:29.227243 | orchestrator | + availability_zone = "nova" 2026-03-05 00:02:29.227247 | orchestrator | + config_drive = true 2026-03-05 00:02:29.227250 | orchestrator | + created = (known after apply) 2026-03-05 00:02:29.227254 | orchestrator | + flavor_id = (known after apply) 2026-03-05 00:02:29.227258 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-05 00:02:29.227262 | orchestrator | + force_delete = false 2026-03-05 00:02:29.227266 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-05 00:02:29.227269 | orchestrator | + id = (known after apply) 2026-03-05 00:02:29.227273 | orchestrator | + image_id = (known after apply) 2026-03-05 00:02:29.227277 | orchestrator | + image_name = (known after apply) 2026-03-05 00:02:29.227281 | orchestrator | + key_pair = "testbed" 2026-03-05 00:02:29.227285 | orchestrator | + name = "testbed-node-0" 2026-03-05 00:02:29.227288 | orchestrator | + power_state = "active" 2026-03-05 00:02:29.227292 | orchestrator | + region = (known after apply) 2026-03-05 00:02:29.227296 | orchestrator | + security_groups = (known after apply) 2026-03-05 00:02:29.227300 | orchestrator | + stop_before_destroy = false 2026-03-05 00:02:29.227304 | orchestrator | + updated = (known after apply) 2026-03-05 00:02:29.227307 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-05 00:02:29.227311 | orchestrator | 2026-03-05 00:02:29.227315 | orchestrator | + block_device { 2026-03-05 00:02:29.227319 | orchestrator | + boot_index = 0 2026-03-05 00:02:29.227323 | orchestrator | + delete_on_termination = false 2026-03-05 00:02:29.227326 | orchestrator | + destination_type = "volume" 2026-03-05 00:02:29.227330 | orchestrator | + multiattach = false 2026-03-05 00:02:29.227334 | orchestrator | + source_type = "volume" 2026-03-05 00:02:29.227338 | orchestrator | + uuid = (known after apply) 2026-03-05 00:02:29.227342 | orchestrator | } 2026-03-05 00:02:29.227346 | orchestrator | 2026-03-05 00:02:29.227349 | orchestrator | + network { 2026-03-05 00:02:29.227353 | orchestrator | + access_network = false 2026-03-05 00:02:29.227357 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-05 00:02:29.227361 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-05 00:02:29.227365 | orchestrator | + mac = (known after apply) 2026-03-05 00:02:29.227368 | orchestrator | + name = (known after apply) 2026-03-05 00:02:29.227372 | orchestrator | + port = (known after apply) 2026-03-05 00:02:29.227376 | orchestrator | + uuid = (known after apply) 2026-03-05 00:02:29.227380 | orchestrator | } 2026-03-05 00:02:29.227384 | orchestrator | } 2026-03-05 00:02:29.227389 | orchestrator | 2026-03-05 00:02:29.227393 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-03-05 00:02:29.227397 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-05 00:02:29.227401 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-05 00:02:29.227408 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-05 00:02:29.227412 | orchestrator | + all_metadata = (known after apply) 2026-03-05 00:02:29.227415 | orchestrator | + all_tags = (known after apply) 2026-03-05 00:02:29.227419 | orchestrator | + availability_zone = "nova" 2026-03-05 00:02:29.227423 
| orchestrator | + config_drive = true 2026-03-05 00:02:29.227427 | orchestrator | + created = (known after apply) 2026-03-05 00:02:29.227430 | orchestrator | + flavor_id = (known after apply) 2026-03-05 00:02:29.227434 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-05 00:02:29.227438 | orchestrator | + force_delete = false 2026-03-05 00:02:29.227442 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-05 00:02:29.227446 | orchestrator | + id = (known after apply) 2026-03-05 00:02:29.227449 | orchestrator | + image_id = (known after apply) 2026-03-05 00:02:29.227453 | orchestrator | + image_name = (known after apply) 2026-03-05 00:02:29.227457 | orchestrator | + key_pair = "testbed" 2026-03-05 00:02:29.227461 | orchestrator | + name = "testbed-node-1" 2026-03-05 00:02:29.227465 | orchestrator | + power_state = "active" 2026-03-05 00:02:29.227468 | orchestrator | + region = (known after apply) 2026-03-05 00:02:29.227472 | orchestrator | + security_groups = (known after apply) 2026-03-05 00:02:29.227476 | orchestrator | + stop_before_destroy = false 2026-03-05 00:02:29.227480 | orchestrator | + updated = (known after apply) 2026-03-05 00:02:29.227486 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-05 00:02:29.227490 | orchestrator | 2026-03-05 00:02:29.227494 | orchestrator | + block_device { 2026-03-05 00:02:29.227498 | orchestrator | + boot_index = 0 2026-03-05 00:02:29.227502 | orchestrator | + delete_on_termination = false 2026-03-05 00:02:29.227506 | orchestrator | + destination_type = "volume" 2026-03-05 00:02:29.227511 | orchestrator | + multiattach = false 2026-03-05 00:02:29.227517 | orchestrator | + source_type = "volume" 2026-03-05 00:02:29.227523 | orchestrator | + uuid = (known after apply) 2026-03-05 00:02:29.227529 | orchestrator | } 2026-03-05 00:02:29.227535 | orchestrator | 2026-03-05 00:02:29.227541 | orchestrator | + network { 2026-03-05 00:02:29.227547 | orchestrator | + access_network = 
false 2026-03-05 00:02:29.227553 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-05 00:02:29.227558 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-05 00:02:29.227564 | orchestrator | + mac = (known after apply) 2026-03-05 00:02:29.227570 | orchestrator | + name = (known after apply) 2026-03-05 00:02:29.227575 | orchestrator | + port = (known after apply) 2026-03-05 00:02:29.227581 | orchestrator | + uuid = (known after apply) 2026-03-05 00:02:29.227586 | orchestrator | } 2026-03-05 00:02:29.227592 | orchestrator | } 2026-03-05 00:02:29.227741 | orchestrator | 2026-03-05 00:02:29.227757 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-03-05 00:02:29.227763 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-05 00:02:29.227769 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-05 00:02:29.227775 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-05 00:02:29.227781 | orchestrator | + all_metadata = (known after apply) 2026-03-05 00:02:29.227787 | orchestrator | + all_tags = (known after apply) 2026-03-05 00:02:29.227793 | orchestrator | + availability_zone = "nova" 2026-03-05 00:02:29.227798 | orchestrator | + config_drive = true 2026-03-05 00:02:29.227804 | orchestrator | + created = (known after apply) 2026-03-05 00:02:29.227810 | orchestrator | + flavor_id = (known after apply) 2026-03-05 00:02:29.227816 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-05 00:02:29.227823 | orchestrator | + force_delete = false 2026-03-05 00:02:29.227829 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-05 00:02:29.227835 | orchestrator | + id = (known after apply) 2026-03-05 00:02:29.227841 | orchestrator | + image_id = (known after apply) 2026-03-05 00:02:29.227855 | orchestrator | + image_name = (known after apply) 2026-03-05 00:02:29.227861 | orchestrator | + key_pair = "testbed" 2026-03-05 00:02:29.227867 | orchestrator | + name = 
"testbed-node-2" 2026-03-05 00:02:29.227873 | orchestrator | + power_state = "active" 2026-03-05 00:02:29.227879 | orchestrator | + region = (known after apply) 2026-03-05 00:02:29.227886 | orchestrator | + security_groups = (known after apply) 2026-03-05 00:02:29.227892 | orchestrator | + stop_before_destroy = false 2026-03-05 00:02:29.227898 | orchestrator | + updated = (known after apply) 2026-03-05 00:02:29.227905 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-05 00:02:29.227911 | orchestrator | 2026-03-05 00:02:29.227917 | orchestrator | + block_device { 2026-03-05 00:02:29.227923 | orchestrator | + boot_index = 0 2026-03-05 00:02:29.227929 | orchestrator | + delete_on_termination = false 2026-03-05 00:02:29.227936 | orchestrator | + destination_type = "volume" 2026-03-05 00:02:29.227941 | orchestrator | + multiattach = false 2026-03-05 00:02:29.227960 | orchestrator | + source_type = "volume" 2026-03-05 00:02:29.227964 | orchestrator | + uuid = (known after apply) 2026-03-05 00:02:29.227969 | orchestrator | } 2026-03-05 00:02:29.227972 | orchestrator | 2026-03-05 00:02:29.227976 | orchestrator | + network { 2026-03-05 00:02:29.227980 | orchestrator | + access_network = false 2026-03-05 00:02:29.227984 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-05 00:02:29.227988 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-05 00:02:29.227991 | orchestrator | + mac = (known after apply) 2026-03-05 00:02:29.227995 | orchestrator | + name = (known after apply) 2026-03-05 00:02:29.227999 | orchestrator | + port = (known after apply) 2026-03-05 00:02:29.228003 | orchestrator | + uuid = (known after apply) 2026-03-05 00:02:29.228006 | orchestrator | } 2026-03-05 00:02:29.228025 | orchestrator | } 2026-03-05 00:02:29.228033 | orchestrator | 2026-03-05 00:02:29.228043 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-03-05 00:02:29.228047 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-03-05 00:02:29.228050 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-05 00:02:29.228054 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-05 00:02:29.228058 | orchestrator | + all_metadata = (known after apply) 2026-03-05 00:02:29.228062 | orchestrator | + all_tags = (known after apply) 2026-03-05 00:02:29.228065 | orchestrator | + availability_zone = "nova" 2026-03-05 00:02:29.228069 | orchestrator | + config_drive = true 2026-03-05 00:02:29.228073 | orchestrator | + created = (known after apply) 2026-03-05 00:02:29.228077 | orchestrator | + flavor_id = (known after apply) 2026-03-05 00:02:29.228081 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-05 00:02:29.228085 | orchestrator | + force_delete = false 2026-03-05 00:02:29.228089 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-05 00:02:29.228103 | orchestrator | + id = (known after apply) 2026-03-05 00:02:29.228107 | orchestrator | + image_id = (known after apply) 2026-03-05 00:02:29.228111 | orchestrator | + image_name = (known after apply) 2026-03-05 00:02:29.228115 | orchestrator | + key_pair = "testbed" 2026-03-05 00:02:29.228119 | orchestrator | + name = "testbed-node-3" 2026-03-05 00:02:29.228122 | orchestrator | + power_state = "active" 2026-03-05 00:02:29.228126 | orchestrator | + region = (known after apply) 2026-03-05 00:02:29.228130 | orchestrator | + security_groups = (known after apply) 2026-03-05 00:02:29.228134 | orchestrator | + stop_before_destroy = false 2026-03-05 00:02:29.228138 | orchestrator | + updated = (known after apply) 2026-03-05 00:02:29.228141 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-05 00:02:29.228145 | orchestrator | 2026-03-05 00:02:29.228149 | orchestrator | + block_device { 2026-03-05 00:02:29.228153 | orchestrator | + boot_index = 0 2026-03-05 00:02:29.228157 | orchestrator | + delete_on_termination = false 2026-03-05 
00:02:29.228160 | orchestrator | + destination_type = "volume" 2026-03-05 00:02:29.228173 | orchestrator | + multiattach = false 2026-03-05 00:02:29.228176 | orchestrator | + source_type = "volume" 2026-03-05 00:02:29.228180 | orchestrator | + uuid = (known after apply) 2026-03-05 00:02:29.228184 | orchestrator | } 2026-03-05 00:02:29.228188 | orchestrator | 2026-03-05 00:02:29.228192 | orchestrator | + network { 2026-03-05 00:02:29.228195 | orchestrator | + access_network = false 2026-03-05 00:02:29.228199 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-05 00:02:29.228203 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-05 00:02:29.228207 | orchestrator | + mac = (known after apply) 2026-03-05 00:02:29.228210 | orchestrator | + name = (known after apply) 2026-03-05 00:02:29.228214 | orchestrator | + port = (known after apply) 2026-03-05 00:02:29.228218 | orchestrator | + uuid = (known after apply) 2026-03-05 00:02:29.228222 | orchestrator | } 2026-03-05 00:02:29.228226 | orchestrator | } 2026-03-05 00:02:29.234067 | orchestrator | 2026-03-05 00:02:29.234103 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-03-05 00:02:29.234108 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-05 00:02:29.234113 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-05 00:02:29.234117 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-05 00:02:29.234121 | orchestrator | + all_metadata = (known after apply) 2026-03-05 00:02:29.234125 | orchestrator | + all_tags = (known after apply) 2026-03-05 00:02:29.234129 | orchestrator | + availability_zone = "nova" 2026-03-05 00:02:29.234133 | orchestrator | + config_drive = true 2026-03-05 00:02:29.234137 | orchestrator | + created = (known after apply) 2026-03-05 00:02:29.234141 | orchestrator | + flavor_id = (known after apply) 2026-03-05 00:02:29.234145 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-05 00:02:29.234148 | 
orchestrator | + force_delete = false 2026-03-05 00:02:29.234152 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-05 00:02:29.234156 | orchestrator | + id = (known after apply) 2026-03-05 00:02:29.234160 | orchestrator | + image_id = (known after apply) 2026-03-05 00:02:29.234163 | orchestrator | + image_name = (known after apply) 2026-03-05 00:02:29.234168 | orchestrator | + key_pair = "testbed" 2026-03-05 00:02:29.234171 | orchestrator | + name = "testbed-node-4" 2026-03-05 00:02:29.234175 | orchestrator | + power_state = "active" 2026-03-05 00:02:29.234179 | orchestrator | + region = (known after apply) 2026-03-05 00:02:29.234183 | orchestrator | + security_groups = (known after apply) 2026-03-05 00:02:29.234187 | orchestrator | + stop_before_destroy = false 2026-03-05 00:02:29.234191 | orchestrator | + updated = (known after apply) 2026-03-05 00:02:29.234196 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-05 00:02:29.234200 | orchestrator | 2026-03-05 00:02:29.234204 | orchestrator | + block_device { 2026-03-05 00:02:29.234208 | orchestrator | + boot_index = 0 2026-03-05 00:02:29.234212 | orchestrator | + delete_on_termination = false 2026-03-05 00:02:29.234216 | orchestrator | + destination_type = "volume" 2026-03-05 00:02:29.234220 | orchestrator | + multiattach = false 2026-03-05 00:02:29.234223 | orchestrator | + source_type = "volume" 2026-03-05 00:02:29.234227 | orchestrator | + uuid = (known after apply) 2026-03-05 00:02:29.234231 | orchestrator | } 2026-03-05 00:02:29.234235 | orchestrator | 2026-03-05 00:02:29.234239 | orchestrator | + network { 2026-03-05 00:02:29.234243 | orchestrator | + access_network = false 2026-03-05 00:02:29.234247 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-05 00:02:29.234251 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-05 00:02:29.234254 | orchestrator | + mac = (known after apply) 2026-03-05 00:02:29.234258 | orchestrator | + name = (known 
after apply) 2026-03-05 00:02:29.234262 | orchestrator | + port = (known after apply) 2026-03-05 00:02:29.234322 | orchestrator | + uuid = (known after apply) 2026-03-05 00:02:29.234327 | orchestrator | } 2026-03-05 00:02:29.234331 | orchestrator | } 2026-03-05 00:02:29.234346 | orchestrator | 2026-03-05 00:02:29.234350 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-03-05 00:02:29.234354 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-05 00:02:29.234358 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-05 00:02:29.234361 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-05 00:02:29.234365 | orchestrator | + all_metadata = (known after apply) 2026-03-05 00:02:29.234369 | orchestrator | + all_tags = (known after apply) 2026-03-05 00:02:29.234373 | orchestrator | + availability_zone = "nova" 2026-03-05 00:02:29.234377 | orchestrator | + config_drive = true 2026-03-05 00:02:29.234380 | orchestrator | + created = (known after apply) 2026-03-05 00:02:29.234384 | orchestrator | + flavor_id = (known after apply) 2026-03-05 00:02:29.234388 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-05 00:02:29.234392 | orchestrator | + force_delete = false 2026-03-05 00:02:29.234396 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-05 00:02:29.234399 | orchestrator | + id = (known after apply) 2026-03-05 00:02:29.234403 | orchestrator | + image_id = (known after apply) 2026-03-05 00:02:29.234407 | orchestrator | + image_name = (known after apply) 2026-03-05 00:02:29.234411 | orchestrator | + key_pair = "testbed" 2026-03-05 00:02:29.234415 | orchestrator | + name = "testbed-node-5" 2026-03-05 00:02:29.234418 | orchestrator | + power_state = "active" 2026-03-05 00:02:29.234422 | orchestrator | + region = (known after apply) 2026-03-05 00:02:29.234426 | orchestrator | + security_groups = (known after apply) 2026-03-05 00:02:29.234430 | orchestrator | + 
stop_before_destroy = false 2026-03-05 00:02:29.234434 | orchestrator | + updated = (known after apply) 2026-03-05 00:02:29.234438 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-05 00:02:29.234441 | orchestrator | 2026-03-05 00:02:29.234445 | orchestrator | + block_device { 2026-03-05 00:02:29.234449 | orchestrator | + boot_index = 0 2026-03-05 00:02:29.234453 | orchestrator | + delete_on_termination = false 2026-03-05 00:02:29.234457 | orchestrator | + destination_type = "volume" 2026-03-05 00:02:29.234460 | orchestrator | + multiattach = false 2026-03-05 00:02:29.234464 | orchestrator | + source_type = "volume" 2026-03-05 00:02:29.234468 | orchestrator | + uuid = (known after apply) 2026-03-05 00:02:29.234472 | orchestrator | } 2026-03-05 00:02:29.234475 | orchestrator | 2026-03-05 00:02:29.234479 | orchestrator | + network { 2026-03-05 00:02:29.234483 | orchestrator | + access_network = false 2026-03-05 00:02:29.234487 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-05 00:02:29.234491 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-05 00:02:29.234495 | orchestrator | + mac = (known after apply) 2026-03-05 00:02:29.234498 | orchestrator | + name = (known after apply) 2026-03-05 00:02:29.234502 | orchestrator | + port = (known after apply) 2026-03-05 00:02:29.234506 | orchestrator | + uuid = (known after apply) 2026-03-05 00:02:29.234510 | orchestrator | } 2026-03-05 00:02:29.234514 | orchestrator | } 2026-03-05 00:02:29.234518 | orchestrator | 2026-03-05 00:02:29.234521 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-03-05 00:02:29.234525 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-03-05 00:02:29.234529 | orchestrator | + fingerprint = (known after apply) 2026-03-05 00:02:29.234533 | orchestrator | + id = (known after apply) 2026-03-05 00:02:29.234537 | orchestrator | + name = "testbed" 2026-03-05 00:02:29.234540 | orchestrator | + private_key = 
(sensitive value) 2026-03-05 00:02:29.234544 | orchestrator | + public_key = (known after apply) 2026-03-05 00:02:29.234548 | orchestrator | + region = (known after apply) 2026-03-05 00:02:29.234552 | orchestrator | + user_id = (known after apply) 2026-03-05 00:02:29.234556 | orchestrator | } 2026-03-05 00:02:29.234559 | orchestrator | 2026-03-05 00:02:29.234572 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-03-05 00:02:29.234576 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-05 00:02:29.234583 | orchestrator | + device = (known after apply) 2026-03-05 00:02:29.234587 | orchestrator | + id = (known after apply) 2026-03-05 00:02:29.234591 | orchestrator | + instance_id = (known after apply) 2026-03-05 00:02:29.234595 | orchestrator | + region = (known after apply) 2026-03-05 00:02:29.234603 | orchestrator | + volume_id = (known after apply) 2026-03-05 00:02:29.234607 | orchestrator | } 2026-03-05 00:02:29.234611 | orchestrator | 2026-03-05 00:02:29.234615 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-03-05 00:02:29.234619 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-05 00:02:29.234623 | orchestrator | + device = (known after apply) 2026-03-05 00:02:29.234626 | orchestrator | + id = (known after apply) 2026-03-05 00:02:29.234630 | orchestrator | + instance_id = (known after apply) 2026-03-05 00:02:29.234634 | orchestrator | + region = (known after apply) 2026-03-05 00:02:29.234638 | orchestrator | + volume_id = (known after apply) 2026-03-05 00:02:29.234641 | orchestrator | } 2026-03-05 00:02:29.234645 | orchestrator | 2026-03-05 00:02:29.234649 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-03-05 00:02:29.234653 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
    {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_subnet_v2.subnet_management will be created
  + resource "openstack_networking_subnet_v2" "subnet_management" {
      + all_tags          = (known after apply)
      + cidr              = "192.168.16.0/20"
      + dns_nameservers   = [
          + "8.8.8.8",
          + "9.9.9.9",
        ]
      + enable_dhcp       = true
      + gateway_ip        = (known after apply)
      + id                = (known after apply)
      + ip_version        = 4
      + ipv6_address_mode = (known after apply)
      + ipv6_ra_mode      = (known after apply)
      + name              = "subnet-testbed-management"
2026-03-05 00:02:29.237108 | orchestrator | + network_id = (known after apply) 2026-03-05 00:02:29.237114 | orchestrator | + no_gateway = false 2026-03-05 00:02:29.237119 | orchestrator | + region = (known after apply) 2026-03-05 00:02:29.237126 | orchestrator | + service_types = (known after apply) 2026-03-05 00:02:29.237138 | orchestrator | + tenant_id = (known after apply) 2026-03-05 00:02:29.237143 | orchestrator | 2026-03-05 00:02:29.237149 | orchestrator | + allocation_pool { 2026-03-05 00:02:29.237154 | orchestrator | + end = "192.168.31.250" 2026-03-05 00:02:29.237159 | orchestrator | + start = "192.168.31.200" 2026-03-05 00:02:29.237165 | orchestrator | } 2026-03-05 00:02:29.237170 | orchestrator | } 2026-03-05 00:02:29.237176 | orchestrator | 2026-03-05 00:02:29.237181 | orchestrator | # terraform_data.image will be created 2026-03-05 00:02:29.237187 | orchestrator | + resource "terraform_data" "image" { 2026-03-05 00:02:29.237193 | orchestrator | + id = (known after apply) 2026-03-05 00:02:29.237206 | orchestrator | + input = "Ubuntu 24.04" 2026-03-05 00:02:29.237212 | orchestrator | + output = (known after apply) 2026-03-05 00:02:29.237219 | orchestrator | } 2026-03-05 00:02:29.237225 | orchestrator | 2026-03-05 00:02:29.237231 | orchestrator | # terraform_data.image_node will be created 2026-03-05 00:02:29.237236 | orchestrator | + resource "terraform_data" "image_node" { 2026-03-05 00:02:29.237243 | orchestrator | + id = (known after apply) 2026-03-05 00:02:29.237248 | orchestrator | + input = "Ubuntu 24.04" 2026-03-05 00:02:29.237254 | orchestrator | + output = (known after apply) 2026-03-05 00:02:29.237259 | orchestrator | } 2026-03-05 00:02:29.237265 | orchestrator | 2026-03-05 00:02:29.237270 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy. 
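The security-group stanzas printed in the plan above correspond to Terraform configuration along these lines (a minimal sketch, not the testbed's actual source; names mirror the plan output, and the resource reference is why `security_group_id` shows "(known after apply)"):

```hcl
# Sketch of the node security group and its ICMP rule as shown in the plan.
resource "openstack_networking_secgroup_v2" "security_group_node" {
  name        = "testbed-node"
  description = "node security group"
}

resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "icmp"
  remote_ip_prefix  = "0.0.0.0/0"
  # Resolved only at apply time, hence "(known after apply)" in the plan.
  security_group_id = openstack_networking_secgroup_v2.security_group_node.id
}
```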
2026-03-05 00:02:29.237275 | orchestrator |
2026-03-05 00:02:29.237281 | orchestrator | Changes to Outputs:
2026-03-05 00:02:29.237286 | orchestrator | + manager_address = (sensitive value)
2026-03-05 00:02:29.237292 | orchestrator | + private_key = (sensitive value)
2026-03-05 00:02:29.278100 | orchestrator | terraform_data.image_node: Creating...
2026-03-05 00:02:29.278178 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=151bae20-16b2-7d14-4e6a-1fb4cb4ed246]
2026-03-05 00:02:32.674081 | orchestrator | terraform_data.image: Creating...
2026-03-05 00:02:32.674169 | orchestrator | terraform_data.image: Creation complete after 0s [id=26bef43f-1e9f-1a7b-aa92-0ed10a2cdad5]
2026-03-05 00:02:32.681071 | orchestrator | data.openstack_images_image_v2.image_node: Reading...
2026-03-05 00:02:32.681120 | orchestrator | data.openstack_images_image_v2.image: Reading...
2026-03-05 00:02:32.706056 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2026-03-05 00:02:32.706101 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2026-03-05 00:02:32.706115 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2026-03-05 00:02:32.706119 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2026-03-05 00:02:32.706123 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2026-03-05 00:02:32.706127 | orchestrator | openstack_compute_keypair_v2.key: Creating...
2026-03-05 00:02:32.706132 | orchestrator | openstack_networking_network_v2.net_management: Creating...
2026-03-05 00:02:32.713376 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2026-03-05 00:02:33.138993 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-03-05 00:02:33.144917 | orchestrator | data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-03-05 00:02:33.146408 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2026-03-05 00:02:33.150907 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2026-03-05 00:02:33.176804 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed]
2026-03-05 00:02:33.184977 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2026-03-05 00:02:33.947544 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=ce3fb987-3f47-4769-8950-55831f214cc8]
2026-03-05 00:02:33.958569 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2026-03-05 00:02:36.294530 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=f3d47084-7273-4e4c-b048-5cf25f7ffc67]
2026-03-05 00:02:36.303106 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2026-03-05 00:02:36.303199 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=4cc63ee5-51dc-4d14-b9fb-faf031b30aaa]
2026-03-05 00:02:36.309445 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2026-03-05 00:02:36.313270 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 3s [id=51d29519-c1f9-43c2-8da2-810d6ee2cf1d]
2026-03-05 00:02:36.317215 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2026-03-05 00:02:36.321924 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 3s [id=46af06ac-e806-45b3-baa6-786374d24d95]
2026-03-05 00:02:36.328269 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2026-03-05 00:02:36.398930 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=db99048b-c1ef-4f9e-82d3-cd84d3f63e80]
2026-03-05 00:02:36.407340 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2026-03-05 00:02:36.411116 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 3s [id=01b35dfc-cc13-430f-9521-065aaefb7085]
2026-03-05 00:02:36.418054 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2026-03-05 00:02:36.448018 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=5fc6e5d1-feaa-44be-badf-9551630a8ded]
2026-03-05 00:02:36.466362 | orchestrator | local_file.id_rsa_pub: Creating...
2026-03-05 00:02:36.471196 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=c02f0fe8e3c26979817670d7b8a453c8f3fd5514]
2026-03-05 00:02:36.481343 | orchestrator | local_sensitive_file.id_rsa: Creating...
2026-03-05 00:02:36.491813 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=ba315b2ea214aafd1f40ab528b864720b58c9cb4]
2026-03-05 00:02:36.496172 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating...
2026-03-05 00:02:36.550687 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=fde3cda2-3067-4d86-95c6-d39f62804520]
2026-03-05 00:02:36.573201 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=94265553-26b7-47c9-a922-5463d2be5f34]
2026-03-05 00:02:37.362282 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=a8ef373d-acca-493f-badf-3a0028b34dd0]
2026-03-05 00:02:37.576164 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 2s [id=1df59f15-0a56-4b5d-b259-d63271edf207]
2026-03-05 00:02:37.581241 | orchestrator | openstack_networking_router_v2.router: Creating...
2026-03-05 00:02:39.736970 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=7eb31bae-b884-479c-be78-88b42a5c2c50]
2026-03-05 00:02:39.741437 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=97a4e51c-10c3-49c3-9fc4-94e957861be7]
2026-03-05 00:02:39.746574 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=d85b406f-f47a-4803-8455-48f8dde86a68]
2026-03-05 00:02:39.808652 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=5e11f2b2-c673-403d-8d8e-e558e292c82f]
2026-03-05 00:02:39.851197 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=834c07da-6670-4f26-8062-9b7380900cd1]
2026-03-05 00:02:39.871533 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=26e3da3f-ebef-4f2e-987f-6c33458d570f]
2026-03-05 00:02:41.619053 | orchestrator | openstack_networking_router_v2.router: Creation complete after 4s [id=ba213bf6-1b5b-4ff5-9ed0-637c71f8334b]
2026-03-05 00:02:41.624324 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating...
2026-03-05 00:02:41.629400 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating...
2026-03-05 00:02:41.629672 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating...
2026-03-05 00:02:41.908265 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=0fff73e1-3dbe-4fd0-9304-b4912a1fff28]
2026-03-05 00:02:41.914774 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=c3d1b77e-e194-4a18-af37-dc6196693076]
2026-03-05 00:02:41.917826 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2026-03-05 00:02:41.918132 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2026-03-05 00:02:41.919624 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2026-03-05 00:02:41.920425 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2026-03-05 00:02:41.921258 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating...
2026-03-05 00:02:41.927035 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating...
2026-03-05 00:02:41.931405 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating...
2026-03-05 00:02:41.936536 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating...
2026-03-05 00:02:41.937997 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating...
2026-03-05 00:02:42.317061 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=e9e3a0f3-d67e-4629-88ed-b62c88ad73b9]
2026-03-05 00:02:42.324183 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating...
2026-03-05 00:02:42.626426 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=f71a7a48-170f-47ee-bb90-212935fbb9c8]
2026-03-05 00:02:42.633511 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2026-03-05 00:02:42.674721 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=a0af91fa-aa84-470d-9e8b-60d02bf34bc3]
2026-03-05 00:02:42.683775 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating...
2026-03-05 00:02:42.847565 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=a574bc05-ea84-4981-90c3-955012795b19]
2026-03-05 00:02:42.850470 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=b16553b2-957e-452c-87d0-a9b625bc8f22]
2026-03-05 00:02:42.852379 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2026-03-05 00:02:42.854970 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2026-03-05 00:02:42.921465 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=d5f5f44e-3f69-41bd-bac8-0bdf9beb500f]
2026-03-05 00:02:42.927554 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2026-03-05 00:02:43.032660 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=03f36fbb-c214-4673-8a3a-b22c7c639269]
2026-03-05 00:02:43.039093 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2026-03-05 00:02:43.134677 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=97686bb2-5ba3-404b-aa0c-d753514104ff]
2026-03-05 00:02:43.156852 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=1daf1455-cfa4-4bd0-b77e-3de528cabe40]
2026-03-05 00:02:43.226930 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 0s [id=b117d6e5-cb3a-4f5c-816b-48f70c7ff90c]
2026-03-05 00:02:43.368421 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=bd76a428-4e84-49e0-ab8a-9def6185d6e4]
2026-03-05 00:02:43.442570 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=0cc5ce0e-5376-41f8-b95f-d9f6e2a855d4]
2026-03-05 00:02:43.613363 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=ec0f0e3a-ee19-439e-95e1-5a610e6e1531]
2026-03-05 00:02:43.629797 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=12e89b71-4588-4016-928c-02d675bee8b8]
2026-03-05 00:02:43.929582 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 2s [id=e4d32e28-c22e-45d0-bdb8-b2cdfbee045d]
2026-03-05 00:02:44.581170 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 3s [id=9886ac95-7dec-44b7-b7f5-c2430c756a58]
2026-03-05 00:02:45.713706 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 4s [id=09a0cb4b-8c7f-4020-8252-8726c2325e21]
2026-03-05 00:02:45.735843 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
2026-03-05 00:02:45.741530 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2026-03-05 00:02:45.747839 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
2026-03-05 00:02:45.751803 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
2026-03-05 00:02:45.758697 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating...
2026-03-05 00:02:45.758796 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-03-05 00:02:45.765347 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-03-05 00:02:48.020666 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=9248951c-da68-4997-9bdf-de0e82e4c4ad]
2026-03-05 00:02:48.025519 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-03-05 00:02:48.033891 | orchestrator | local_file.inventory: Creating...
2026-03-05 00:02:48.035944 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
2026-03-05 00:02:48.039205 | orchestrator | local_file.inventory: Creation complete after 0s [id=ccfcf55b5bed7e64e46634b9281a9f72d139e640]
2026-03-05 00:02:48.040909 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=b4d5b478d2947edf00c1146e42bc6a76632b8961]
2026-03-05 00:02:48.933501 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=9248951c-da68-4997-9bdf-de0e82e4c4ad]
2026-03-05 00:02:55.737713 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-03-05 00:02:55.753478 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-03-05 00:02:55.755668 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2026-03-05 00:02:55.762133 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-03-05 00:02:55.762853 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-03-05 00:02:55.766212 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2026-03-05 00:03:05.742583 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-03-05 00:03:05.754105 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-03-05 00:03:05.756420 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2026-03-05 00:03:05.762835 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-03-05 00:03:05.764030 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2026-03-05 00:03:05.767278 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2026-03-05 00:03:15.752281 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2026-03-05 00:03:15.754476 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2026-03-05 00:03:15.756858 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2026-03-05 00:03:15.763197 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2026-03-05 00:03:15.764413 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2026-03-05 00:03:15.767841 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2026-03-05 00:03:16.517527 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=6c7f0ee2-e2ed-4eca-b981-92ae72977176]
2026-03-05 00:03:16.566928 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 31s [id=f9e3f617-7965-42f3-8293-cbef9632d4d4]
2026-03-05 00:03:16.610241 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 31s [id=be21cdf6-08a1-4f43-8266-a5c30869643e]
2026-03-05 00:03:25.764190 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [40s elapsed]
2026-03-05 00:03:25.765417 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [40s elapsed]
2026-03-05 00:03:25.768795 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [40s elapsed]
2026-03-05 00:03:26.664199 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 41s [id=0d2aeca6-7e04-4fd2-8797-acf68fe1ef93]
2026-03-05 00:03:26.706587 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 41s [id=1d638172-c0c3-49c8-af87-0e1a804d9a77]
2026-03-05 00:03:26.809612 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 41s [id=05faf079-4db2-4504-b0d4-0652632147a1]
2026-03-05 00:03:26.829517 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-03-05 00:03:26.839156 | orchestrator | null_resource.node_semaphore: Creating...
2026-03-05 00:03:26.839225 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2026-03-05 00:03:26.843531 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-03-05 00:03:26.843593 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=8728877357153744487]
2026-03-05 00:03:26.850609 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-03-05 00:03:26.853647 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2026-03-05 00:03:26.863941 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2026-03-05 00:03:26.866430 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2026-03-05 00:03:26.870457 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-03-05 00:03:26.870849 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2026-03-05 00:03:26.874548 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
2026-03-05 00:03:30.255851 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 3s [id=1d638172-c0c3-49c8-af87-0e1a804d9a77/f3d47084-7273-4e4c-b048-5cf25f7ffc67]
2026-03-05 00:03:30.281878 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 3s [id=0d2aeca6-7e04-4fd2-8797-acf68fe1ef93/db99048b-c1ef-4f9e-82d3-cd84d3f63e80]
2026-03-05 00:03:30.317967 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 3s [id=f9e3f617-7965-42f3-8293-cbef9632d4d4/4cc63ee5-51dc-4d14-b9fb-faf031b30aaa]
2026-03-05 00:03:30.372456 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 3s [id=1d638172-c0c3-49c8-af87-0e1a804d9a77/01b35dfc-cc13-430f-9521-065aaefb7085]
2026-03-05 00:03:30.398505 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 3s [id=f9e3f617-7965-42f3-8293-cbef9632d4d4/5fc6e5d1-feaa-44be-badf-9551630a8ded]
2026-03-05 00:03:30.399698 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 3s [id=0d2aeca6-7e04-4fd2-8797-acf68fe1ef93/94265553-26b7-47c9-a922-5463d2be5f34]
2026-03-05 00:03:36.460142 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 9s [id=1d638172-c0c3-49c8-af87-0e1a804d9a77/51d29519-c1f9-43c2-8da2-810d6ee2cf1d]
2026-03-05 00:03:36.485372 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 9s [id=f9e3f617-7965-42f3-8293-cbef9632d4d4/fde3cda2-3067-4d86-95c6-d39f62804520]
2026-03-05 00:03:36.505942 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 10s [id=0d2aeca6-7e04-4fd2-8797-acf68fe1ef93/46af06ac-e806-45b3-baa6-786374d24d95]
2026-03-05 00:03:36.877598 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2026-03-05 00:03:46.877760 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2026-03-05 00:03:47.289460 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=278e6150-6131-4726-ac96-5a086c681304]
2026-03-05 00:03:47.319466 | orchestrator |
2026-03-05 00:03:47.319584 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
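The `terraform_data.image` resources applied above record only the image name; a plausible wiring into the Glance image lookup that the log also shows reading (a sketch, the data-source linkage is an assumption, only the resource and its input value appear in the log):

```hcl
# Records the image name; changing the value forces dependents to update.
resource "terraform_data" "image" {
  input = "Ubuntu 24.04"
}

# Assumed wiring: the recorded name drives the image lookup seen in the log
# as data.openstack_images_image_v2.image.
data "openstack_images_image_v2" "image" {
  name        = terraform_data.image.output
  most_recent = true
}
```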
2026-03-05 00:03:47.319607 | orchestrator |
2026-03-05 00:03:47.319652 | orchestrator | Outputs:
2026-03-05 00:03:47.319668 | orchestrator |
2026-03-05 00:03:47.319684 | orchestrator | manager_address =
2026-03-05 00:03:47.319701 | orchestrator | private_key =
2026-03-05 00:03:47.427767 | orchestrator | ok: Runtime: 0:01:23.125028
2026-03-05 00:03:47.460064 |
2026-03-05 00:03:47.460194 | TASK [Create infrastructure (stable)]
2026-03-05 00:03:47.994120 | orchestrator | skipping: Conditional result was False
2026-03-05 00:03:48.008416 |
2026-03-05 00:03:48.008547 | TASK [Fetch manager address]
2026-03-05 00:03:48.491195 | orchestrator | ok
2026-03-05 00:03:48.500709 |
2026-03-05 00:03:48.500856 | TASK [Set manager_host address]
2026-03-05 00:03:48.578449 | orchestrator | ok
2026-03-05 00:03:48.588591 |
2026-03-05 00:03:48.588716 | LOOP [Update ansible collections]
2026-03-05 00:03:49.760996 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-03-05 00:03:49.761321 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-03-05 00:03:49.761377 | orchestrator | Starting galaxy collection install process
2026-03-05 00:03:49.761415 | orchestrator | Process install dependency map
2026-03-05 00:03:49.761450 | orchestrator | Starting collection install process
2026-03-05 00:03:49.761483 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons'
2026-03-05 00:03:49.761520 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons
2026-03-05 00:03:49.761568 | orchestrator | osism.commons:999.0.0 was installed successfully
2026-03-05 00:03:49.761645 | orchestrator | ok: Item: commons Runtime: 0:00:00.829493
2026-03-05 00:03:50.967586 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-03-05 00:03:50.967728 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-03-05 00:03:50.967773 | orchestrator | Starting galaxy collection install process
2026-03-05 00:03:50.967826 | orchestrator | Process install dependency map
2026-03-05 00:03:50.967860 | orchestrator | Starting collection install process
2026-03-05 00:03:50.967891 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed06/.ansible/collections/ansible_collections/osism/services'
2026-03-05 00:03:50.967921 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/services
2026-03-05 00:03:50.967949 | orchestrator | osism.services:999.0.0 was installed successfully
2026-03-05 00:03:50.967991 | orchestrator | ok: Item: services Runtime: 0:00:00.855549
2026-03-05 00:03:50.989393 |
2026-03-05 00:03:50.989546 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-03-05 00:04:01.571941 | orchestrator | ok
2026-03-05 00:04:01.581651 |
2026-03-05 00:04:01.581839 | TASK [Wait a little longer for the manager so that everything is ready]
2026-03-05 00:05:01.629433 | orchestrator | ok
2026-03-05 00:05:01.641088 |
2026-03-05 00:05:01.641245 | TASK [Fetch manager ssh hostkey]
2026-03-05 00:05:03.225002 | orchestrator | Output suppressed because no_log was given
2026-03-05 00:05:03.238704 |
2026-03-05 00:05:03.238928 | TASK [Get ssh keypair from terraform environment]
2026-03-05 00:05:03.775031 | orchestrator | ok: Runtime: 0:00:00.010784
2026-03-05 00:05:03.792966 |
2026-03-05 00:05:03.793249 | TASK [Point out that the following task takes some time and does not give any output]
2026-03-05 00:05:03.842366 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-03-05 00:05:03.853588 |
2026-03-05 00:05:03.853736 | TASK [Run manager part 0]
2026-03-05 00:05:04.876523 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-03-05 00:05:04.928748 | orchestrator |
2026-03-05 00:05:04.928810 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2026-03-05 00:05:04.928823 | orchestrator |
2026-03-05 00:05:04.928843 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2026-03-05 00:05:07.049098 | orchestrator | ok: [testbed-manager]
2026-03-05 00:05:07.049164 | orchestrator |
2026-03-05 00:05:07.049192 | orchestrator | PLAY [Run manager part 0] ******************************************************
2026-03-05 00:05:07.049204 | orchestrator |
2026-03-05 00:05:07.049215 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-05 00:05:09.230486 | orchestrator | ok: [testbed-manager]
2026-03-05 00:05:09.230557 | orchestrator |
2026-03-05 00:05:09.230571 | orchestrator | TASK [Get home directory of ansible user] **************************************
2026-03-05 00:05:09.994250 | orchestrator | ok: [testbed-manager]
2026-03-05 00:05:09.994580 | orchestrator |
2026-03-05 00:05:09.994615 | orchestrator | TASK [Set repo_path fact] ******************************************************
2026-03-05 00:05:10.057016 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:05:10.057080 | orchestrator |
2026-03-05 00:05:10.057090 | orchestrator | TASK [Update package cache] ****************************************************
2026-03-05 00:05:10.092972 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:05:10.093331 | orchestrator |
2026-03-05 00:05:10.093351 | orchestrator | TASK [Install required packages] ***********************************************
2026-03-05 00:05:10.132225 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:05:10.132280 | orchestrator |
2026-03-05 00:05:10.132288 | orchestrator | TASK [Remove some python packages] *********************************************
2026-03-05 00:05:10.161855 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:05:10.161903 | orchestrator |
2026-03-05 00:05:10.161910 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2026-03-05 00:05:10.200602 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:05:10.200781 | orchestrator |
2026-03-05 00:05:10.200795 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ******************************
2026-03-05 00:05:10.230821 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:05:10.230879 | orchestrator |
2026-03-05 00:05:10.230889 | orchestrator | TASK [Fail if Debian version is lower than 12] *********************************
2026-03-05 00:05:10.261272 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:05:10.261344 | orchestrator |
2026-03-05 00:05:10.261358 | orchestrator | TASK [Set APT options on manager] **********************************************
2026-03-05 00:05:11.163946 | orchestrator | changed: [testbed-manager]
2026-03-05 00:05:11.164003 | orchestrator |
2026-03-05 00:05:11.164010 | orchestrator | TASK [Update APT cache and run dist-upgrade] ***********************************
2026-03-05 00:08:20.381768 | orchestrator | changed: [testbed-manager]
2026-03-05 00:08:20.381894 | orchestrator |
2026-03-05 00:08:20.381916 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-03-05 00:09:51.416311 | orchestrator | changed: [testbed-manager]
2026-03-05 00:09:51.416395 | orchestrator |
2026-03-05 00:09:51.416409 | orchestrator | TASK [Install required packages] ***********************************************
2026-03-05 00:10:14.518488 | orchestrator | changed: [testbed-manager]
2026-03-05 00:10:14.518595 | orchestrator |
2026-03-05 00:10:14.518616 | orchestrator | TASK [Remove some python packages] *********************************************
2026-03-05 00:10:25.268438 | orchestrator | changed: [testbed-manager]
2026-03-05 00:10:25.268482 | orchestrator |
2026-03-05 00:10:25.268490 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2026-03-05 00:10:25.326395 | orchestrator | ok: [testbed-manager]
2026-03-05 00:10:25.326436 | orchestrator |
2026-03-05 00:10:25.326445 | orchestrator | TASK [Get current user] ********************************************************
2026-03-05 00:10:26.141286 | orchestrator | ok: [testbed-manager]
2026-03-05 00:10:26.141375 | orchestrator |
2026-03-05 00:10:26.141394 | orchestrator | TASK [Create venv directory] ***************************************************
2026-03-05 00:10:26.883920 | orchestrator | changed: [testbed-manager]
2026-03-05 00:10:26.883970 | orchestrator |
2026-03-05 00:10:26.883984 | orchestrator | TASK [Install netaddr in venv] *************************************************
2026-03-05 00:10:33.437192 | orchestrator | changed: [testbed-manager]
2026-03-05 00:10:33.437280 | orchestrator |
2026-03-05 00:10:33.437320 | orchestrator | TASK [Install ansible-core in venv] ********************************************
2026-03-05 00:10:41.364698 | orchestrator | changed: [testbed-manager]
2026-03-05 00:10:41.364750 | orchestrator |
2026-03-05 00:10:41.364763 | orchestrator | TASK [Install requests >= 2.32.2] **********************************************
2026-03-05 00:10:44.132510 | orchestrator | changed: [testbed-manager]
2026-03-05 00:10:44.132585 | orchestrator |
2026-03-05 00:10:44.132594 | orchestrator | TASK [Install docker >= 7.1.0] *************************************************
2026-03-05 00:10:45.919233 | orchestrator | changed: [testbed-manager]
2026-03-05 00:10:45.919287 | orchestrator |
2026-03-05 00:10:45.919299 | orchestrator | TASK [Create directories in /opt/src] ******************************************
2026-03-05
00:10:47.069303 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-03-05 00:10:47.069419 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-03-05 00:10:47.069446 | orchestrator | 2026-03-05 00:10:47.069468 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-03-05 00:10:47.114892 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-03-05 00:10:47.114977 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-03-05 00:10:47.115028 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-03-05 00:10:47.115041 | orchestrator | deprecation_warnings=False in ansible.cfg. 2026-03-05 00:10:51.530475 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-03-05 00:10:51.530564 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-03-05 00:10:51.530579 | orchestrator | 2026-03-05 00:10:51.530592 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-03-05 00:10:52.106762 | orchestrator | changed: [testbed-manager] 2026-03-05 00:10:52.106848 | orchestrator | 2026-03-05 00:10:52.106866 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-03-05 00:11:17.642308 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-03-05 00:11:17.642412 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-03-05 00:11:17.642431 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-03-05 00:11:17.642443 | orchestrator | 2026-03-05 00:11:17.642456 | orchestrator | TASK [Install local collections] *********************************************** 2026-03-05 00:11:20.106964 | orchestrator | changed: [testbed-manager] => 
(item=ansible-collection-commons) 2026-03-05 00:11:20.107059 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-03-05 00:11:20.107075 | orchestrator | 2026-03-05 00:11:20.107088 | orchestrator | PLAY [Create operator user] **************************************************** 2026-03-05 00:11:20.107100 | orchestrator | 2026-03-05 00:11:20.107111 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-05 00:11:21.573614 | orchestrator | ok: [testbed-manager] 2026-03-05 00:11:21.573709 | orchestrator | 2026-03-05 00:11:21.573726 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-03-05 00:11:21.623096 | orchestrator | ok: [testbed-manager] 2026-03-05 00:11:21.623154 | orchestrator | 2026-03-05 00:11:21.623164 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-03-05 00:11:21.691536 | orchestrator | ok: [testbed-manager] 2026-03-05 00:11:21.691590 | orchestrator | 2026-03-05 00:11:21.691597 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-03-05 00:11:22.526817 | orchestrator | changed: [testbed-manager] 2026-03-05 00:11:22.527579 | orchestrator | 2026-03-05 00:11:22.527605 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-03-05 00:11:23.269029 | orchestrator | changed: [testbed-manager] 2026-03-05 00:11:23.269113 | orchestrator | 2026-03-05 00:11:23.269127 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-03-05 00:11:25.434288 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-03-05 00:11:25.434378 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-03-05 00:11:25.434395 | orchestrator | 2026-03-05 00:11:25.434428 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] 
************************* 2026-03-05 00:11:26.909623 | orchestrator | changed: [testbed-manager] 2026-03-05 00:11:26.909767 | orchestrator | 2026-03-05 00:11:26.909798 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-03-05 00:11:28.753392 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-03-05 00:11:28.753491 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-03-05 00:11:28.753507 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-03-05 00:11:28.753519 | orchestrator | 2026-03-05 00:11:28.753532 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-03-05 00:11:28.808932 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:11:28.809034 | orchestrator | 2026-03-05 00:11:28.809057 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-03-05 00:11:28.875746 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:11:28.875844 | orchestrator | 2026-03-05 00:11:28.875867 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-03-05 00:11:29.471381 | orchestrator | changed: [testbed-manager] 2026-03-05 00:11:29.471423 | orchestrator | 2026-03-05 00:11:29.471432 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-03-05 00:11:29.544587 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:11:29.544627 | orchestrator | 2026-03-05 00:11:29.544635 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-03-05 00:11:30.488400 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-05 00:11:30.488440 | orchestrator | changed: [testbed-manager] 2026-03-05 00:11:30.488448 | orchestrator | 2026-03-05 00:11:30.488454 | orchestrator | TASK 
[osism.commons.operator : Delete ssh authorized keys] ********************* 2026-03-05 00:11:30.525983 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:11:30.526059 | orchestrator | 2026-03-05 00:11:30.526068 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-03-05 00:11:30.567645 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:11:30.567689 | orchestrator | 2026-03-05 00:11:30.567698 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-03-05 00:11:30.606543 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:11:30.606587 | orchestrator | 2026-03-05 00:11:30.606598 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-03-05 00:11:30.699612 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:11:30.699652 | orchestrator | 2026-03-05 00:11:30.699660 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-03-05 00:11:31.491922 | orchestrator | ok: [testbed-manager] 2026-03-05 00:11:31.492021 | orchestrator | 2026-03-05 00:11:31.492039 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-03-05 00:11:31.492051 | orchestrator | 2026-03-05 00:11:31.492063 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-05 00:11:32.920715 | orchestrator | ok: [testbed-manager] 2026-03-05 00:11:32.920763 | orchestrator | 2026-03-05 00:11:32.920770 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-03-05 00:11:34.023223 | orchestrator | changed: [testbed-manager] 2026-03-05 00:11:34.023299 | orchestrator | 2026-03-05 00:11:34.023316 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-05 00:11:34.023329 | orchestrator | testbed-manager : ok=33 changed=23 
unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 2026-03-05 00:11:34.023344 | orchestrator | 2026-03-05 00:11:34.627646 | orchestrator | ok: Runtime: 0:06:30.083199 2026-03-05 00:11:34.645856 | 2026-03-05 00:11:34.646001 | TASK [Point out that the log in on the manager is now possible] 2026-03-05 00:11:34.694133 | orchestrator | ok: It is now possible to log in to the manager with 'make login'. 2026-03-05 00:11:34.704888 | 2026-03-05 00:11:34.705010 | TASK [Point out that the following task takes some time and does not give any output] 2026-03-05 00:11:34.748522 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of it here. It takes a few minutes for this task to complete. 2026-03-05 00:11:34.765688 | 2026-03-05 00:11:34.765850 | TASK [Run manager part 1 + 2] 2026-03-05 00:11:35.617688 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-03-05 00:11:35.674319 | orchestrator | 2026-03-05 00:11:35.674365 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-03-05 00:11:35.674373 | orchestrator | 2026-03-05 00:11:35.674385 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-05 00:11:38.202340 | orchestrator | ok: [testbed-manager] 2026-03-05 00:11:38.202665 | orchestrator | 2026-03-05 00:11:38.202746 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-03-05 00:11:38.236992 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:11:38.237037 | orchestrator | 2026-03-05 00:11:38.237046 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-03-05 00:11:38.279634 | orchestrator | ok: [testbed-manager] 2026-03-05 00:11:38.279693 | orchestrator | 2026-03-05 00:11:38.279705 | orchestrator | TASK [osism.commons.repository : Gather variables for 
each operating system] *** 2026-03-05 00:11:38.318090 | orchestrator | ok: [testbed-manager] 2026-03-05 00:11:38.318138 | orchestrator | 2026-03-05 00:11:38.318147 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-03-05 00:11:38.398816 | orchestrator | ok: [testbed-manager] 2026-03-05 00:11:38.398868 | orchestrator | 2026-03-05 00:11:38.398875 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-03-05 00:11:38.471910 | orchestrator | ok: [testbed-manager] 2026-03-05 00:11:38.471946 | orchestrator | 2026-03-05 00:11:38.471953 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-03-05 00:11:38.509221 | orchestrator | included: /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-03-05 00:11:38.509933 | orchestrator | 2026-03-05 00:11:38.509960 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-03-05 00:11:39.248179 | orchestrator | ok: [testbed-manager] 2026-03-05 00:11:39.248249 | orchestrator | 2026-03-05 00:11:39.248267 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-03-05 00:11:39.305796 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:11:39.305861 | orchestrator | 2026-03-05 00:11:39.305876 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-03-05 00:11:40.749290 | orchestrator | changed: [testbed-manager] 2026-03-05 00:11:40.749351 | orchestrator | 2026-03-05 00:11:40.749364 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-03-05 00:11:41.336165 | orchestrator | ok: [testbed-manager] 2026-03-05 00:11:41.336202 | orchestrator | 2026-03-05 00:11:41.336208 | orchestrator | TASK [osism.commons.repository : Copy 
ubuntu.sources file] ********************* 2026-03-05 00:11:42.520156 | orchestrator | changed: [testbed-manager] 2026-03-05 00:11:42.520247 | orchestrator | 2026-03-05 00:11:42.520271 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-03-05 00:11:58.358088 | orchestrator | changed: [testbed-manager] 2026-03-05 00:11:58.358135 | orchestrator | 2026-03-05 00:11:58.358143 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-03-05 00:11:59.063475 | orchestrator | ok: [testbed-manager] 2026-03-05 00:11:59.063512 | orchestrator | 2026-03-05 00:11:59.063519 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-03-05 00:11:59.115516 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:11:59.115552 | orchestrator | 2026-03-05 00:11:59.115558 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-03-05 00:12:00.121243 | orchestrator | changed: [testbed-manager] 2026-03-05 00:12:00.121384 | orchestrator | 2026-03-05 00:12:00.121413 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-03-05 00:12:01.116361 | orchestrator | changed: [testbed-manager] 2026-03-05 00:12:01.116450 | orchestrator | 2026-03-05 00:12:01.116465 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-03-05 00:12:01.703728 | orchestrator | changed: [testbed-manager] 2026-03-05 00:12:01.703800 | orchestrator | 2026-03-05 00:12:01.703812 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-03-05 00:12:01.742099 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-03-05 00:12:01.742181 | orchestrator | display.prompt_until(msg) instead. 
This feature will be removed in version 2026-03-05 00:12:01.742191 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-03-05 00:12:01.742198 | orchestrator | deprecation_warnings=False in ansible.cfg. 2026-03-05 00:12:04.393288 | orchestrator | changed: [testbed-manager] 2026-03-05 00:12:04.393504 | orchestrator | 2026-03-05 00:12:04.393529 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-03-05 00:12:13.515424 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-03-05 00:12:13.515526 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-03-05 00:12:13.515545 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-03-05 00:12:13.515558 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-03-05 00:12:13.515577 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-03-05 00:12:13.515589 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-03-05 00:12:13.515600 | orchestrator | 2026-03-05 00:12:13.515612 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-03-05 00:12:14.609356 | orchestrator | changed: [testbed-manager] 2026-03-05 00:12:14.609447 | orchestrator | 2026-03-05 00:12:14.609461 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2026-03-05 00:12:14.648751 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:12:14.648796 | orchestrator | 2026-03-05 00:12:14.648803 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-03-05 00:12:17.834448 | orchestrator | changed: [testbed-manager] 2026-03-05 00:12:17.834508 | orchestrator | 2026-03-05 00:12:17.834516 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-03-05 00:12:17.873463 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:12:17.873509 | 
orchestrator | 2026-03-05 00:12:17.873515 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-03-05 00:13:57.855696 | orchestrator | changed: [testbed-manager] 2026-03-05 00:13:57.855783 | orchestrator | 2026-03-05 00:13:57.855796 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-03-05 00:13:59.056535 | orchestrator | ok: [testbed-manager] 2026-03-05 00:13:59.056580 | orchestrator | 2026-03-05 00:13:59.056589 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-05 00:13:59.056598 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2026-03-05 00:13:59.056605 | orchestrator | 2026-03-05 00:13:59.399752 | orchestrator | ok: Runtime: 0:02:24.075458 2026-03-05 00:13:59.416463 | 2026-03-05 00:13:59.416602 | TASK [Reboot manager] 2026-03-05 00:14:00.953571 | orchestrator | ok: Runtime: 0:00:00.971474 2026-03-05 00:14:00.970518 | 2026-03-05 00:14:00.970686 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-03-05 00:14:17.416550 | orchestrator | ok 2026-03-05 00:14:17.428778 | 2026-03-05 00:14:17.428945 | TASK [Wait a little longer for the manager so that everything is ready] 2026-03-05 00:15:17.479267 | orchestrator | ok 2026-03-05 00:15:17.489453 | 2026-03-05 00:15:17.489590 | TASK [Deploy manager + bootstrap nodes] 2026-03-05 00:15:19.988517 | orchestrator | 2026-03-05 00:15:19.988696 | orchestrator | # DEPLOY MANAGER 2026-03-05 00:15:19.988717 | orchestrator | 2026-03-05 00:15:19.988730 | orchestrator | + set -e 2026-03-05 00:15:19.988743 | orchestrator | + echo 2026-03-05 00:15:19.988756 | orchestrator | + echo '# DEPLOY MANAGER' 2026-03-05 00:15:19.988771 | orchestrator | + echo 2026-03-05 00:15:19.988814 | orchestrator | + cat /opt/manager-vars.sh 2026-03-05 00:15:19.992213 | orchestrator | export NUMBER_OF_NODES=6 2026-03-05 
00:15:19.992257 | orchestrator | 2026-03-05 00:15:19.992269 | orchestrator | export CEPH_VERSION=reef 2026-03-05 00:15:19.992282 | orchestrator | export CONFIGURATION_VERSION=main 2026-03-05 00:15:19.992293 | orchestrator | export MANAGER_VERSION=latest 2026-03-05 00:15:19.992316 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-03-05 00:15:19.992332 | orchestrator | 2026-03-05 00:15:19.992357 | orchestrator | export ARA=false 2026-03-05 00:15:19.992372 | orchestrator | export DEPLOY_MODE=manager 2026-03-05 00:15:19.992393 | orchestrator | export TEMPEST=true 2026-03-05 00:15:19.992410 | orchestrator | export IS_ZUUL=true 2026-03-05 00:15:19.992425 | orchestrator | 2026-03-05 00:15:19.992449 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.12 2026-03-05 00:15:19.992467 | orchestrator | export EXTERNAL_API=false 2026-03-05 00:15:19.992483 | orchestrator | 2026-03-05 00:15:19.992500 | orchestrator | export IMAGE_USER=ubuntu 2026-03-05 00:15:19.992515 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-03-05 00:15:19.992525 | orchestrator | 2026-03-05 00:15:19.992535 | orchestrator | export CEPH_STACK=ceph-ansible 2026-03-05 00:15:19.992553 | orchestrator | 2026-03-05 00:15:19.992564 | orchestrator | + echo 2026-03-05 00:15:19.992575 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-05 00:15:19.993684 | orchestrator | ++ export INTERACTIVE=false 2026-03-05 00:15:19.993709 | orchestrator | ++ INTERACTIVE=false 2026-03-05 00:15:19.993721 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-05 00:15:19.993734 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-05 00:15:19.993917 | orchestrator | + source /opt/manager-vars.sh 2026-03-05 00:15:19.993933 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-05 00:15:19.993945 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-05 00:15:19.993956 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-05 00:15:19.993968 | orchestrator | ++ CEPH_VERSION=reef 2026-03-05 00:15:19.993991 | orchestrator 
| ++ export CONFIGURATION_VERSION=main 2026-03-05 00:15:19.994004 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-05 00:15:19.994057 | orchestrator | ++ export MANAGER_VERSION=latest 2026-03-05 00:15:19.994070 | orchestrator | ++ MANAGER_VERSION=latest 2026-03-05 00:15:19.994079 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-05 00:15:19.994100 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-05 00:15:19.994110 | orchestrator | ++ export ARA=false 2026-03-05 00:15:19.994120 | orchestrator | ++ ARA=false 2026-03-05 00:15:19.994155 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-05 00:15:19.994165 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-05 00:15:19.994186 | orchestrator | ++ export TEMPEST=true 2026-03-05 00:15:19.994196 | orchestrator | ++ TEMPEST=true 2026-03-05 00:15:19.994206 | orchestrator | ++ export IS_ZUUL=true 2026-03-05 00:15:19.994215 | orchestrator | ++ IS_ZUUL=true 2026-03-05 00:15:19.994231 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.12 2026-03-05 00:15:19.994241 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.12 2026-03-05 00:15:19.994250 | orchestrator | ++ export EXTERNAL_API=false 2026-03-05 00:15:19.994260 | orchestrator | ++ EXTERNAL_API=false 2026-03-05 00:15:19.994273 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-05 00:15:19.994283 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-05 00:15:19.994293 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-05 00:15:19.994303 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-05 00:15:19.994312 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-05 00:15:19.994327 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-05 00:15:19.994340 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-03-05 00:15:20.056409 | orchestrator | + docker version 2026-03-05 00:15:20.177040 | orchestrator | Client: Docker Engine - Community 2026-03-05 00:15:20.177146 | orchestrator | Version: 27.5.1 
2026-03-05 00:15:20.177163 | orchestrator | API version: 1.47 2026-03-05 00:15:20.177178 | orchestrator | Go version: go1.22.11 2026-03-05 00:15:20.177190 | orchestrator | Git commit: 9f9e405 2026-03-05 00:15:20.177201 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-03-05 00:15:20.177213 | orchestrator | OS/Arch: linux/amd64 2026-03-05 00:15:20.177224 | orchestrator | Context: default 2026-03-05 00:15:20.177235 | orchestrator | 2026-03-05 00:15:20.177247 | orchestrator | Server: Docker Engine - Community 2026-03-05 00:15:20.177258 | orchestrator | Engine: 2026-03-05 00:15:20.177269 | orchestrator | Version: 27.5.1 2026-03-05 00:15:20.177281 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-03-05 00:15:20.177322 | orchestrator | Go version: go1.22.11 2026-03-05 00:15:20.177334 | orchestrator | Git commit: 4c9b3b0 2026-03-05 00:15:20.177345 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-03-05 00:15:20.177356 | orchestrator | OS/Arch: linux/amd64 2026-03-05 00:15:20.177367 | orchestrator | Experimental: false 2026-03-05 00:15:20.177378 | orchestrator | containerd: 2026-03-05 00:15:20.177389 | orchestrator | Version: v2.2.1 2026-03-05 00:15:20.177400 | orchestrator | GitCommit: dea7da592f5d1d2b7755e3a161be07f43fad8f75 2026-03-05 00:15:20.177412 | orchestrator | runc: 2026-03-05 00:15:20.177423 | orchestrator | Version: 1.3.4 2026-03-05 00:15:20.177434 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-03-05 00:15:20.177445 | orchestrator | docker-init: 2026-03-05 00:15:20.177456 | orchestrator | Version: 0.19.0 2026-03-05 00:15:20.177468 | orchestrator | GitCommit: de40ad0 2026-03-05 00:15:20.180013 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-03-05 00:15:20.188888 | orchestrator | + set -e 2026-03-05 00:15:20.188953 | orchestrator | + source /opt/manager-vars.sh 2026-03-05 00:15:20.188967 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-05 00:15:20.188980 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-05 
00:15:20.188991 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-05 00:15:20.189002 | orchestrator | ++ CEPH_VERSION=reef 2026-03-05 00:15:20.189013 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-05 00:15:20.189025 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-05 00:15:20.189036 | orchestrator | ++ export MANAGER_VERSION=latest 2026-03-05 00:15:20.189046 | orchestrator | ++ MANAGER_VERSION=latest 2026-03-05 00:15:20.189057 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-05 00:15:20.189068 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-05 00:15:20.189079 | orchestrator | ++ export ARA=false 2026-03-05 00:15:20.189090 | orchestrator | ++ ARA=false 2026-03-05 00:15:20.189100 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-05 00:15:20.189111 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-05 00:15:20.189122 | orchestrator | ++ export TEMPEST=true 2026-03-05 00:15:20.189133 | orchestrator | ++ TEMPEST=true 2026-03-05 00:15:20.189143 | orchestrator | ++ export IS_ZUUL=true 2026-03-05 00:15:20.189154 | orchestrator | ++ IS_ZUUL=true 2026-03-05 00:15:20.189165 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.12 2026-03-05 00:15:20.189176 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.12 2026-03-05 00:15:20.189187 | orchestrator | ++ export EXTERNAL_API=false 2026-03-05 00:15:20.189197 | orchestrator | ++ EXTERNAL_API=false 2026-03-05 00:15:20.189208 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-05 00:15:20.189219 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-05 00:15:20.189230 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-05 00:15:20.189240 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-05 00:15:20.189252 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-05 00:15:20.189262 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-05 00:15:20.189273 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-05 00:15:20.189284 | orchestrator | ++ export 
INTERACTIVE=false 2026-03-05 00:15:20.189295 | orchestrator | ++ INTERACTIVE=false 2026-03-05 00:15:20.189305 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-05 00:15:20.189321 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-05 00:15:20.189332 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-03-05 00:15:20.189343 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-05 00:15:20.189354 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef 2026-03-05 00:15:20.193715 | orchestrator | + set -e 2026-03-05 00:15:20.193763 | orchestrator | + VERSION=reef 2026-03-05 00:15:20.194547 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2026-03-05 00:15:20.202174 | orchestrator | + [[ -n ceph_version: reef ]] 2026-03-05 00:15:20.202252 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2026-03-05 00:15:20.208649 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2 2026-03-05 00:15:20.214724 | orchestrator | + set -e 2026-03-05 00:15:20.214769 | orchestrator | + VERSION=2024.2 2026-03-05 00:15:20.215433 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2026-03-05 00:15:20.217526 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2026-03-05 00:15:20.217561 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml 2026-03-05 00:15:20.223459 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2026-03-05 00:15:20.223946 | orchestrator | ++ semver latest 7.0.0 2026-03-05 00:15:20.282192 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-05 00:15:20.282408 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-05 00:15:20.282432 | orchestrator | + echo 'enable_osism_kubernetes: true' 2026-03-05 00:15:20.282564 | orchestrator | ++ semver latest 10.0.0-0 2026-03-05 00:15:20.345269 | 
orchestrator | + [[ -1 -ge 0 ]] 2026-03-05 00:15:20.346153 | orchestrator | ++ semver 2024.2 2025.1 2026-03-05 00:15:20.407622 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-05 00:15:20.407712 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2026-03-05 00:15:20.492791 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-05 00:15:20.494217 | orchestrator | + source /opt/venv/bin/activate 2026-03-05 00:15:20.495067 | orchestrator | ++ deactivate nondestructive 2026-03-05 00:15:20.495100 | orchestrator | ++ '[' -n '' ']' 2026-03-05 00:15:20.495115 | orchestrator | ++ '[' -n '' ']' 2026-03-05 00:15:20.495129 | orchestrator | ++ hash -r 2026-03-05 00:15:20.495142 | orchestrator | ++ '[' -n '' ']' 2026-03-05 00:15:20.495155 | orchestrator | ++ unset VIRTUAL_ENV 2026-03-05 00:15:20.495167 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-03-05 00:15:20.495184 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2026-03-05 00:15:20.495348 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-03-05 00:15:20.495366 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-03-05 00:15:20.495378 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-03-05 00:15:20.495389 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-03-05 00:15:20.495402 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-05 00:15:20.495415 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-05 00:15:20.495426 | orchestrator | ++ export PATH 2026-03-05 00:15:20.495443 | orchestrator | ++ '[' -n '' ']' 2026-03-05 00:15:20.495455 | orchestrator | ++ '[' -z '' ']' 2026-03-05 00:15:20.495466 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-03-05 00:15:20.495477 | orchestrator | ++ PS1='(venv) ' 2026-03-05 00:15:20.495488 | orchestrator | ++ export PS1 2026-03-05 00:15:20.495500 | orchestrator | ++ 
VIRTUAL_ENV_PROMPT='(venv) ' 2026-03-05 00:15:20.495512 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-03-05 00:15:20.495523 | orchestrator | ++ hash -r 2026-03-05 00:15:20.495560 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2026-03-05 00:15:21.813504 | orchestrator | 2026-03-05 00:15:21.813646 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2026-03-05 00:15:21.813702 | orchestrator | 2026-03-05 00:15:21.813743 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-03-05 00:15:22.386769 | orchestrator | ok: [testbed-manager] 2026-03-05 00:15:22.386890 | orchestrator | 2026-03-05 00:15:22.386909 | orchestrator | TASK [Copy fact files] ********************************************************* 2026-03-05 00:15:23.408186 | orchestrator | changed: [testbed-manager] 2026-03-05 00:15:23.408308 | orchestrator | 2026-03-05 00:15:23.408334 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2026-03-05 00:15:23.408354 | orchestrator | 2026-03-05 00:15:23.408372 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-05 00:15:25.883918 | orchestrator | ok: [testbed-manager] 2026-03-05 00:15:25.884024 | orchestrator | 2026-03-05 00:15:25.884041 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2026-03-05 00:15:25.947359 | orchestrator | ok: [testbed-manager] 2026-03-05 00:15:25.947460 | orchestrator | 2026-03-05 00:15:25.947478 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2026-03-05 00:15:26.414785 | orchestrator | changed: [testbed-manager] 2026-03-05 00:15:26.414950 | orchestrator | 2026-03-05 00:15:26.414968 | orchestrator | TASK [Add netbox_enable parameter] 
********************************************* 2026-03-05 00:15:26.459351 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:15:26.459445 | orchestrator | 2026-03-05 00:15:26.459483 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-03-05 00:15:26.800380 | orchestrator | changed: [testbed-manager] 2026-03-05 00:15:26.800483 | orchestrator | 2026-03-05 00:15:26.800501 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2026-03-05 00:15:27.151790 | orchestrator | ok: [testbed-manager] 2026-03-05 00:15:27.151938 | orchestrator | 2026-03-05 00:15:27.151956 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2026-03-05 00:15:27.280777 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:15:27.280885 | orchestrator | 2026-03-05 00:15:27.280902 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2026-03-05 00:15:27.280914 | orchestrator | 2026-03-05 00:15:27.280926 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-05 00:15:29.044446 | orchestrator | ok: [testbed-manager] 2026-03-05 00:15:29.044545 | orchestrator | 2026-03-05 00:15:29.044562 | orchestrator | TASK [Apply traefik role] ****************************************************** 2026-03-05 00:15:29.151362 | orchestrator | included: osism.services.traefik for testbed-manager 2026-03-05 00:15:29.151457 | orchestrator | 2026-03-05 00:15:29.151472 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2026-03-05 00:15:29.206247 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2026-03-05 00:15:29.206418 | orchestrator | 2026-03-05 00:15:29.206434 | orchestrator | TASK [osism.services.traefik : Create required directories] 
******************** 2026-03-05 00:15:30.338197 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2026-03-05 00:15:30.338299 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2026-03-05 00:15:30.338314 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2026-03-05 00:15:30.338326 | orchestrator | 2026-03-05 00:15:30.338338 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2026-03-05 00:15:32.163478 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2026-03-05 00:15:32.163542 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2026-03-05 00:15:32.163551 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2026-03-05 00:15:32.163559 | orchestrator | 2026-03-05 00:15:32.163568 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2026-03-05 00:15:32.812992 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-05 00:15:32.813096 | orchestrator | changed: [testbed-manager] 2026-03-05 00:15:32.813113 | orchestrator | 2026-03-05 00:15:32.813125 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2026-03-05 00:15:33.485345 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-05 00:15:33.485443 | orchestrator | changed: [testbed-manager] 2026-03-05 00:15:33.485460 | orchestrator | 2026-03-05 00:15:33.485472 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2026-03-05 00:15:33.543406 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:15:33.543494 | orchestrator | 2026-03-05 00:15:33.543510 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2026-03-05 00:15:33.919896 | orchestrator | ok: [testbed-manager] 2026-03-05 00:15:33.919972 | orchestrator | 2026-03-05 00:15:33.919984 | orchestrator | 
TASK [osism.services.traefik : Include service tasks] ************************** 2026-03-05 00:15:33.999249 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2026-03-05 00:15:33.999345 | orchestrator | 2026-03-05 00:15:33.999361 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2026-03-05 00:15:35.138957 | orchestrator | changed: [testbed-manager] 2026-03-05 00:15:35.139049 | orchestrator | 2026-03-05 00:15:35.139065 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2026-03-05 00:15:36.043707 | orchestrator | changed: [testbed-manager] 2026-03-05 00:15:36.043802 | orchestrator | 2026-03-05 00:15:36.043848 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2026-03-05 00:15:46.918239 | orchestrator | changed: [testbed-manager] 2026-03-05 00:15:46.918298 | orchestrator | 2026-03-05 00:15:46.918314 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2026-03-05 00:15:46.973487 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:15:46.973598 | orchestrator | 2026-03-05 00:15:46.973619 | orchestrator | PLAY [Deploy manager service] ************************************************** 2026-03-05 00:15:46.973636 | orchestrator | 2026-03-05 00:15:46.973651 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-05 00:15:48.822390 | orchestrator | ok: [testbed-manager] 2026-03-05 00:15:48.822556 | orchestrator | 2026-03-05 00:15:48.822618 | orchestrator | TASK [Apply manager role] ****************************************************** 2026-03-05 00:15:48.937210 | orchestrator | included: osism.services.manager for testbed-manager 2026-03-05 00:15:48.937300 | orchestrator | 2026-03-05 00:15:48.937315 | orchestrator | TASK 
[osism.services.manager : Include install tasks] ************************** 2026-03-05 00:15:48.995964 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-03-05 00:15:48.996046 | orchestrator | 2026-03-05 00:15:48.996061 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-03-05 00:15:51.516975 | orchestrator | ok: [testbed-manager] 2026-03-05 00:15:51.517165 | orchestrator | 2026-03-05 00:15:51.517182 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2026-03-05 00:15:51.574153 | orchestrator | ok: [testbed-manager] 2026-03-05 00:15:51.574266 | orchestrator | 2026-03-05 00:15:51.574290 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-03-05 00:15:51.712470 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-03-05 00:15:51.712586 | orchestrator | 2026-03-05 00:15:51.712616 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-03-05 00:15:54.692011 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2026-03-05 00:15:54.692089 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2026-03-05 00:15:54.692096 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2026-03-05 00:15:54.692102 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2026-03-05 00:15:54.692108 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-03-05 00:15:54.692113 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2026-03-05 00:15:54.692118 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2026-03-05 00:15:54.692123 | orchestrator | changed: [testbed-manager] 
=> (item=/opt/state) 2026-03-05 00:15:54.692128 | orchestrator | 2026-03-05 00:15:54.692134 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-03-05 00:15:55.337677 | orchestrator | changed: [testbed-manager] 2026-03-05 00:15:55.337737 | orchestrator | 2026-03-05 00:15:55.337743 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-03-05 00:15:55.979051 | orchestrator | changed: [testbed-manager] 2026-03-05 00:15:55.979160 | orchestrator | 2026-03-05 00:15:55.979176 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2026-03-05 00:15:56.074321 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-03-05 00:15:56.074398 | orchestrator | 2026-03-05 00:15:56.074409 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2026-03-05 00:15:57.329450 | orchestrator | changed: [testbed-manager] => (item=ara) 2026-03-05 00:15:57.329543 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2026-03-05 00:15:57.329555 | orchestrator | 2026-03-05 00:15:57.329566 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-03-05 00:15:57.980604 | orchestrator | changed: [testbed-manager] 2026-03-05 00:15:57.980684 | orchestrator | 2026-03-05 00:15:57.980698 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-03-05 00:15:58.029382 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:15:58.029474 | orchestrator | 2026-03-05 00:15:58.029490 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-03-05 00:15:58.143346 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-03-05 00:15:58.143441 | orchestrator | 2026-03-05 00:15:58.143457 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-03-05 00:15:58.816316 | orchestrator | changed: [testbed-manager] 2026-03-05 00:15:58.816379 | orchestrator | 2026-03-05 00:15:58.816390 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-03-05 00:15:58.894649 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-03-05 00:15:58.894778 | orchestrator | 2026-03-05 00:15:58.894795 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-03-05 00:16:00.302586 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-05 00:16:00.302696 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-05 00:16:00.302713 | orchestrator | changed: [testbed-manager] 2026-03-05 00:16:00.302725 | orchestrator | 2026-03-05 00:16:00.302738 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-03-05 00:16:00.927294 | orchestrator | changed: [testbed-manager] 2026-03-05 00:16:00.927404 | orchestrator | 2026-03-05 00:16:00.927421 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-03-05 00:16:00.988309 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:16:00.988403 | orchestrator | 2026-03-05 00:16:00.988419 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-03-05 00:16:01.083912 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-03-05 00:16:01.084020 | orchestrator | 
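Editor's note: the celery config tasks included above go on to raise two inotify limits (the `fs.inotify.max_user_watches` and `fs.inotify.max_user_instances` tasks in the following entries). The log does not show the values applied; a persistent equivalent would be a `sysctl.d` fragment of roughly this shape (file name and numbers are purely illustrative, not taken from the log):

```
# /etc/sysctl.d/99-osism-manager.conf -- illustrative values only; the
# actual values set by config-celery.yml are not visible in this log
fs.inotify.max_user_watches = 524288
fs.inotify.max_user_instances = 512
```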
2026-03-05 00:16:01.084037 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-03-05 00:16:01.668650 | orchestrator | changed: [testbed-manager] 2026-03-05 00:16:01.668744 | orchestrator | 2026-03-05 00:16:01.668782 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-03-05 00:16:02.082446 | orchestrator | changed: [testbed-manager] 2026-03-05 00:16:02.082545 | orchestrator | 2026-03-05 00:16:02.082561 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-03-05 00:16:03.333502 | orchestrator | changed: [testbed-manager] => (item=conductor) 2026-03-05 00:16:03.333606 | orchestrator | changed: [testbed-manager] => (item=openstack) 2026-03-05 00:16:03.333621 | orchestrator | 2026-03-05 00:16:03.333634 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-03-05 00:16:03.993720 | orchestrator | changed: [testbed-manager] 2026-03-05 00:16:03.993828 | orchestrator | 2026-03-05 00:16:03.993843 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-03-05 00:16:04.412091 | orchestrator | ok: [testbed-manager] 2026-03-05 00:16:04.412205 | orchestrator | 2026-03-05 00:16:04.412230 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-03-05 00:16:04.798294 | orchestrator | changed: [testbed-manager] 2026-03-05 00:16:04.798392 | orchestrator | 2026-03-05 00:16:04.798409 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-03-05 00:16:04.853399 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:16:04.853511 | orchestrator | 2026-03-05 00:16:04.853537 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-03-05 00:16:04.921172 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-03-05 00:16:04.921267 | orchestrator | 2026-03-05 00:16:04.921282 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-03-05 00:16:04.971587 | orchestrator | ok: [testbed-manager] 2026-03-05 00:16:04.971667 | orchestrator | 2026-03-05 00:16:04.971679 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-03-05 00:16:07.016609 | orchestrator | changed: [testbed-manager] => (item=osism) 2026-03-05 00:16:07.016697 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2026-03-05 00:16:07.016712 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2026-03-05 00:16:07.016724 | orchestrator | 2026-03-05 00:16:07.016737 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-03-05 00:16:07.702807 | orchestrator | changed: [testbed-manager] 2026-03-05 00:16:07.703002 | orchestrator | 2026-03-05 00:16:07.703020 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-03-05 00:16:08.434085 | orchestrator | changed: [testbed-manager] 2026-03-05 00:16:08.434140 | orchestrator | 2026-03-05 00:16:08.434148 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-03-05 00:16:09.188065 | orchestrator | changed: [testbed-manager] 2026-03-05 00:16:09.189071 | orchestrator | 2026-03-05 00:16:09.189119 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-03-05 00:16:09.271184 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-03-05 00:16:09.271268 | orchestrator | 2026-03-05 00:16:09.271284 | orchestrator | TASK 
[osism.services.manager : Include scripts vars file] ********************** 2026-03-05 00:16:09.321013 | orchestrator | ok: [testbed-manager] 2026-03-05 00:16:09.321092 | orchestrator | 2026-03-05 00:16:09.321105 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-03-05 00:16:10.037783 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2026-03-05 00:16:10.037907 | orchestrator | 2026-03-05 00:16:10.037932 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-03-05 00:16:10.127598 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-03-05 00:16:10.127679 | orchestrator | 2026-03-05 00:16:10.127692 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2026-03-05 00:16:10.877006 | orchestrator | changed: [testbed-manager] 2026-03-05 00:16:10.877070 | orchestrator | 2026-03-05 00:16:10.877077 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2026-03-05 00:16:11.467384 | orchestrator | ok: [testbed-manager] 2026-03-05 00:16:11.467470 | orchestrator | 2026-03-05 00:16:11.467486 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-03-05 00:16:11.521787 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:16:11.521925 | orchestrator | 2026-03-05 00:16:11.521942 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-03-05 00:16:11.582609 | orchestrator | ok: [testbed-manager] 2026-03-05 00:16:11.582697 | orchestrator | 2026-03-05 00:16:11.582712 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-03-05 00:16:12.441843 | orchestrator | changed: [testbed-manager] 2026-03-05 00:16:12.441963 | orchestrator | 2026-03-05 
00:16:12.441985 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-03-05 00:17:25.346733 | orchestrator | changed: [testbed-manager] 2026-03-05 00:17:25.347033 | orchestrator | 2026-03-05 00:17:25.347059 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-03-05 00:17:26.349310 | orchestrator | ok: [testbed-manager] 2026-03-05 00:17:26.349392 | orchestrator | 2026-03-05 00:17:26.349402 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-03-05 00:17:26.413713 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:17:26.413774 | orchestrator | 2026-03-05 00:17:26.413780 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-03-05 00:17:28.989447 | orchestrator | changed: [testbed-manager] 2026-03-05 00:17:28.989543 | orchestrator | 2026-03-05 00:17:28.989560 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2026-03-05 00:17:29.100941 | orchestrator | ok: [testbed-manager] 2026-03-05 00:17:29.101066 | orchestrator | 2026-03-05 00:17:29.101109 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-03-05 00:17:29.101124 | orchestrator | 2026-03-05 00:17:29.101136 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-03-05 00:17:29.161633 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:17:29.161714 | orchestrator | 2026-03-05 00:17:29.161725 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-03-05 00:18:29.212221 | orchestrator | Pausing for 60 seconds 2026-03-05 00:18:29.212389 | orchestrator | changed: [testbed-manager] 2026-03-05 00:18:29.212407 | orchestrator | 2026-03-05 00:18:29.212420 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure 
that all containers are up] *** 2026-03-05 00:18:32.295405 | orchestrator | changed: [testbed-manager] 2026-03-05 00:18:32.295511 | orchestrator | 2026-03-05 00:18:32.295528 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2026-03-05 00:19:34.386007 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-03-05 00:19:34.386137 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2026-03-05 00:19:34.386150 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left). 2026-03-05 00:19:34.386176 | orchestrator | changed: [testbed-manager] 2026-03-05 00:19:34.386182 | orchestrator | 2026-03-05 00:19:34.386187 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-03-05 00:19:45.343344 | orchestrator | changed: [testbed-manager] 2026-03-05 00:19:45.343435 | orchestrator | 2026-03-05 00:19:45.343445 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-03-05 00:19:45.440492 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-03-05 00:19:45.440742 | orchestrator | 2026-03-05 00:19:45.440775 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-03-05 00:19:45.440796 | orchestrator | 2026-03-05 00:19:45.440816 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-03-05 00:19:45.490435 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:19:45.490528 | orchestrator | 2026-03-05 00:19:45.490542 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-03-05 00:19:45.571268 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-03-05 00:19:45.571364 | orchestrator | 2026-03-05 00:19:45.571379 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-03-05 00:19:46.372906 | orchestrator | changed: [testbed-manager] 2026-03-05 00:19:46.372993 | orchestrator | 2026-03-05 00:19:46.373005 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-03-05 00:19:49.659155 | orchestrator | ok: [testbed-manager] 2026-03-05 00:19:49.659257 | orchestrator | 2026-03-05 00:19:49.659274 | orchestrator | TASK [osism.services.manager : Display version check results] ****************** 2026-03-05 00:19:49.739025 | orchestrator | ok: [testbed-manager] => { 2026-03-05 00:19:49.739154 | orchestrator | "version_check_result.stdout_lines": [ 2026-03-05 00:19:49.739172 | orchestrator | "=== OSISM Container Version Check ===", 2026-03-05 00:19:49.739187 | orchestrator | "Checking running containers against expected versions...", 2026-03-05 00:19:49.739200 | orchestrator | "", 2026-03-05 00:19:49.739212 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-03-05 00:19:49.739224 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:latest", 2026-03-05 00:19:49.739235 | orchestrator | " Enabled: true", 2026-03-05 00:19:49.739246 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:latest", 2026-03-05 00:19:49.739257 | orchestrator | " Status: ✅ MATCH", 2026-03-05 00:19:49.739269 | orchestrator | "", 2026-03-05 00:19:49.739280 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-03-05 00:19:49.739292 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:latest", 2026-03-05 00:19:49.739303 | orchestrator | " Enabled: true", 2026-03-05 00:19:49.739314 | orchestrator | " Running: 
registry.osism.tech/osism/osism-ansible:latest", 2026-03-05 00:19:49.739325 | orchestrator | " Status: ✅ MATCH", 2026-03-05 00:19:49.739336 | orchestrator | "", 2026-03-05 00:19:49.739347 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-03-05 00:19:49.739358 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:latest", 2026-03-05 00:19:49.739369 | orchestrator | " Enabled: true", 2026-03-05 00:19:49.739380 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:latest", 2026-03-05 00:19:49.739391 | orchestrator | " Status: ✅ MATCH", 2026-03-05 00:19:49.739402 | orchestrator | "", 2026-03-05 00:19:49.739413 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-03-05 00:19:49.739425 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:reef", 2026-03-05 00:19:49.739436 | orchestrator | " Enabled: true", 2026-03-05 00:19:49.739447 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:reef", 2026-03-05 00:19:49.739458 | orchestrator | " Status: ✅ MATCH", 2026-03-05 00:19:49.739469 | orchestrator | "", 2026-03-05 00:19:49.739480 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-03-05 00:19:49.739545 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:2024.2", 2026-03-05 00:19:49.739559 | orchestrator | " Enabled: true", 2026-03-05 00:19:49.739572 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:2024.2", 2026-03-05 00:19:49.739585 | orchestrator | " Status: ✅ MATCH", 2026-03-05 00:19:49.739599 | orchestrator | "", 2026-03-05 00:19:49.739611 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-03-05 00:19:49.739624 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-03-05 00:19:49.739636 | orchestrator | " Enabled: true", 2026-03-05 00:19:49.739649 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-03-05 00:19:49.739681 | 
orchestrator | " Status: ✅ MATCH", 2026-03-05 00:19:49.739694 | orchestrator | "", 2026-03-05 00:19:49.739707 | orchestrator | "Checking service: ara-server (ARA Server)", 2026-03-05 00:19:49.739720 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2026-03-05 00:19:49.739733 | orchestrator | " Enabled: true", 2026-03-05 00:19:49.739746 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3", 2026-03-05 00:19:49.739758 | orchestrator | " Status: ✅ MATCH", 2026-03-05 00:19:49.739769 | orchestrator | "", 2026-03-05 00:19:49.739780 | orchestrator | "Checking service: mariadb (MariaDB for ARA)", 2026-03-05 00:19:49.739791 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-03-05 00:19:49.739802 | orchestrator | " Enabled: true", 2026-03-05 00:19:49.739813 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-03-05 00:19:49.739831 | orchestrator | " Status: ✅ MATCH", 2026-03-05 00:19:49.739843 | orchestrator | "", 2026-03-05 00:19:49.739854 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-03-05 00:19:49.739870 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:latest", 2026-03-05 00:19:49.739882 | orchestrator | " Enabled: true", 2026-03-05 00:19:49.739893 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:latest", 2026-03-05 00:19:49.739904 | orchestrator | " Status: ✅ MATCH", 2026-03-05 00:19:49.739915 | orchestrator | "", 2026-03-05 00:19:49.739926 | orchestrator | "Checking service: redis (Redis Cache)", 2026-03-05 00:19:49.739937 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-03-05 00:19:49.739948 | orchestrator | " Enabled: true", 2026-03-05 00:19:49.739959 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-03-05 00:19:49.739969 | orchestrator | " Status: ✅ MATCH", 2026-03-05 00:19:49.739980 | orchestrator | "", 
2026-03-05 00:19:49.740008 | orchestrator | "Checking service: api (OSISM API Service)", 2026-03-05 00:19:49.740031 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-03-05 00:19:49.740043 | orchestrator | " Enabled: true", 2026-03-05 00:19:49.740054 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-03-05 00:19:49.740065 | orchestrator | " Status: ✅ MATCH", 2026-03-05 00:19:49.740076 | orchestrator | "", 2026-03-05 00:19:49.740087 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-03-05 00:19:49.740097 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-03-05 00:19:49.740108 | orchestrator | " Enabled: true", 2026-03-05 00:19:49.740131 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-03-05 00:19:49.740142 | orchestrator | " Status: ✅ MATCH", 2026-03-05 00:19:49.740153 | orchestrator | "", 2026-03-05 00:19:49.740167 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-03-05 00:19:49.740187 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-03-05 00:19:49.740207 | orchestrator | " Enabled: true", 2026-03-05 00:19:49.740224 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-03-05 00:19:49.740242 | orchestrator | " Status: ✅ MATCH", 2026-03-05 00:19:49.740260 | orchestrator | "", 2026-03-05 00:19:49.740280 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-03-05 00:19:49.740298 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-03-05 00:19:49.740310 | orchestrator | " Enabled: true", 2026-03-05 00:19:49.740332 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-03-05 00:19:49.740344 | orchestrator | " Status: ✅ MATCH", 2026-03-05 00:19:49.740354 | orchestrator | "", 2026-03-05 00:19:49.740365 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-03-05 00:19:49.740396 | orchestrator | " 
Expected: registry.osism.tech/osism/osism:latest", 2026-03-05 00:19:49.740407 | orchestrator | " Enabled: true", 2026-03-05 00:19:49.740418 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-03-05 00:19:49.740429 | orchestrator | " Status: ✅ MATCH", 2026-03-05 00:19:49.740439 | orchestrator | "", 2026-03-05 00:19:49.740450 | orchestrator | "=== Summary ===", 2026-03-05 00:19:49.740461 | orchestrator | "Errors (version mismatches): 0", 2026-03-05 00:19:49.740472 | orchestrator | "Warnings (expected containers not running): 0", 2026-03-05 00:19:49.740483 | orchestrator | "", 2026-03-05 00:19:49.740493 | orchestrator | "✅ All running containers match expected versions!" 2026-03-05 00:19:49.740504 | orchestrator | ] 2026-03-05 00:19:49.740515 | orchestrator | } 2026-03-05 00:19:49.740527 | orchestrator | 2026-03-05 00:19:49.740538 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-03-05 00:19:49.790012 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:19:49.790199 | orchestrator | 2026-03-05 00:19:49.790217 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-05 00:19:49.790231 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2026-03-05 00:19:49.790281 | orchestrator | 2026-03-05 00:19:49.900917 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-05 00:19:49.901024 | orchestrator | + deactivate 2026-03-05 00:19:49.901052 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-03-05 00:19:49.901074 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-05 00:19:49.901092 | orchestrator | + export PATH 2026-03-05 00:19:49.901111 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-03-05 00:19:49.901129 | orchestrator | + '[' 
-n '' ']' 2026-03-05 00:19:49.901146 | orchestrator | + hash -r 2026-03-05 00:19:49.901165 | orchestrator | + '[' -n '' ']' 2026-03-05 00:19:49.901181 | orchestrator | + unset VIRTUAL_ENV 2026-03-05 00:19:49.901200 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-03-05 00:19:49.901217 | orchestrator | + '[' '!' '' = nondestructive ']' 2026-03-05 00:19:49.901234 | orchestrator | + unset -f deactivate 2026-03-05 00:19:49.901254 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-03-05 00:19:49.907253 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-05 00:19:49.907324 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-03-05 00:19:49.907337 | orchestrator | + local max_attempts=60 2026-03-05 00:19:49.907348 | orchestrator | + local name=ceph-ansible 2026-03-05 00:19:49.907359 | orchestrator | + local attempt_num=1 2026-03-05 00:19:49.908111 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-05 00:19:49.943743 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-05 00:19:49.943830 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-03-05 00:19:49.943853 | orchestrator | + local max_attempts=60 2026-03-05 00:19:49.943872 | orchestrator | + local name=kolla-ansible 2026-03-05 00:19:49.943888 | orchestrator | + local attempt_num=1 2026-03-05 00:19:49.944488 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-03-05 00:19:49.979554 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-05 00:19:49.979755 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-03-05 00:19:49.979790 | orchestrator | + local max_attempts=60 2026-03-05 00:19:49.979809 | orchestrator | + local name=osism-ansible 2026-03-05 00:19:49.979827 | orchestrator | + local attempt_num=1 2026-03-05 00:19:49.980093 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 
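The xtrace above shows `wait_for_container_healthy` checking each manager container via `docker inspect`; in the log every container is already healthy on the first check. A minimal sketch of such a polling helper, assuming a fixed sleep between attempts (the retry delay is not visible in the trace and is an assumption):

```shell
#!/bin/sh
# Poll a container's health status until it reports "healthy" or the
# attempt budget is exhausted. Mirrors the trace above; the 5-second
# sleep interval is an assumption, not taken from the log.
wait_for_container_healthy() {
    max_attempts="$1"
    name="$2"
    attempt_num=1
    until [ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" = "healthy" ]; do
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "container $name did not become healthy in time" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5
    done
}
```

With a healthy container the loop body never executes, which matches the immediate `healthy == healthy` comparisons seen in the trace.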
2026-03-05 00:19:50.012446 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-05 00:19:50.012605 | orchestrator | + [[ true == \t\r\u\e ]] 2026-03-05 00:19:50.012623 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-03-05 00:19:50.731962 | orchestrator | + docker compose --project-directory /opt/manager ps 2026-03-05 00:19:50.939836 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-03-05 00:19:50.939975 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy) 2026-03-05 00:19:50.939994 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy) 2026-03-05 00:19:50.940006 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp 2026-03-05 00:19:50.940095 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up 2 minutes (healthy) 8000/tcp 2026-03-05 00:19:50.940108 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy) 2026-03-05 00:19:50.940119 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy) 2026-03-05 00:19:50.940130 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy) 2026-03-05 00:19:50.940160 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener 2 minutes ago Up 2 minutes (healthy) 2026-03-05 00:19:50.940172 | orchestrator | manager-mariadb-1 
registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp 2026-03-05 00:19:50.940183 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack 2 minutes ago Up 2 minutes (healthy) 2026-03-05 00:19:50.940193 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp 2026-03-05 00:19:50.940204 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy) 2026-03-05 00:19:50.940215 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend 2 minutes ago Up 2 minutes 192.168.16.5:3000->3000/tcp 2026-03-05 00:19:50.940226 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy) 2026-03-05 00:19:50.940237 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy) 2026-03-05 00:19:50.946453 | orchestrator | ++ semver latest 7.0.0 2026-03-05 00:19:50.996082 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-05 00:19:50.996174 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-05 00:19:50.996188 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-03-05 00:19:51.000747 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-03-05 00:20:03.223124 | orchestrator | 2026-03-05 00:20:03 | INFO  | Prepare task for execution of resolvconf. 2026-03-05 00:20:03.450449 | orchestrator | 2026-03-05 00:20:03 | INFO  | Task c45a3b1b-f9a6-48be-af17-7030d1564124 (resolvconf) was prepared for execution. 
2026-03-05 00:20:03.450594 | orchestrator | 2026-03-05 00:20:03 | INFO  | It takes a moment until task c45a3b1b-f9a6-48be-af17-7030d1564124 (resolvconf) has been started and output is visible here. 2026-03-05 00:20:19.037348 | orchestrator | 2026-03-05 00:20:19.037462 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-03-05 00:20:19.037477 | orchestrator | 2026-03-05 00:20:19.037489 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-05 00:20:19.037499 | orchestrator | Thursday 05 March 2026 00:20:07 +0000 (0:00:00.127) 0:00:00.128 ******** 2026-03-05 00:20:19.037509 | orchestrator | ok: [testbed-manager] 2026-03-05 00:20:19.037521 | orchestrator | 2026-03-05 00:20:19.037531 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-03-05 00:20:19.037541 | orchestrator | Thursday 05 March 2026 00:20:11 +0000 (0:00:04.535) 0:00:04.663 ******** 2026-03-05 00:20:19.037551 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:20:19.037565 | orchestrator | 2026-03-05 00:20:19.037582 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-03-05 00:20:19.037599 | orchestrator | Thursday 05 March 2026 00:20:12 +0000 (0:00:00.068) 0:00:04.732 ******** 2026-03-05 00:20:19.037616 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-03-05 00:20:19.037632 | orchestrator | 2026-03-05 00:20:19.037648 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-03-05 00:20:19.037722 | orchestrator | Thursday 05 March 2026 00:20:12 +0000 (0:00:00.085) 0:00:04.817 ******** 2026-03-05 00:20:19.037740 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-03-05 00:20:19.037757 | orchestrator | 2026-03-05 00:20:19.037789 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2026-03-05 00:20:19.037808 | orchestrator | Thursday 05 March 2026 00:20:12 +0000 (0:00:00.093) 0:00:04.910 ******** 2026-03-05 00:20:19.037819 | orchestrator | ok: [testbed-manager] 2026-03-05 00:20:19.037829 | orchestrator | 2026-03-05 00:20:19.037840 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-03-05 00:20:19.037850 | orchestrator | Thursday 05 March 2026 00:20:13 +0000 (0:00:01.161) 0:00:06.071 ******** 2026-03-05 00:20:19.037860 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:20:19.037870 | orchestrator | 2026-03-05 00:20:19.037883 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-03-05 00:20:19.037894 | orchestrator | Thursday 05 March 2026 00:20:13 +0000 (0:00:00.059) 0:00:06.130 ******** 2026-03-05 00:20:19.037906 | orchestrator | ok: [testbed-manager] 2026-03-05 00:20:19.037918 | orchestrator | 2026-03-05 00:20:19.037930 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-03-05 00:20:19.037942 | orchestrator | Thursday 05 March 2026 00:20:14 +0000 (0:00:01.508) 0:00:07.639 ******** 2026-03-05 00:20:19.037954 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:20:19.037966 | orchestrator | 2026-03-05 00:20:19.037978 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-03-05 00:20:19.037991 | orchestrator | Thursday 05 March 2026 00:20:14 +0000 (0:00:00.071) 0:00:07.710 ******** 2026-03-05 00:20:19.038003 | orchestrator | changed: [testbed-manager] 2026-03-05 00:20:19.038014 | orchestrator | 2026-03-05 
00:20:19.038085 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-03-05 00:20:19.038098 | orchestrator | Thursday 05 March 2026 00:20:15 +0000 (0:00:00.551) 0:00:08.261 ******** 2026-03-05 00:20:19.038109 | orchestrator | changed: [testbed-manager] 2026-03-05 00:20:19.038121 | orchestrator | 2026-03-05 00:20:19.038133 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-03-05 00:20:19.038145 | orchestrator | Thursday 05 March 2026 00:20:16 +0000 (0:00:01.083) 0:00:09.345 ******** 2026-03-05 00:20:19.038159 | orchestrator | ok: [testbed-manager] 2026-03-05 00:20:19.038197 | orchestrator | 2026-03-05 00:20:19.038209 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-03-05 00:20:19.038220 | orchestrator | Thursday 05 March 2026 00:20:17 +0000 (0:00:00.956) 0:00:10.301 ******** 2026-03-05 00:20:19.038231 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-03-05 00:20:19.038242 | orchestrator | 2026-03-05 00:20:19.038253 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-03-05 00:20:19.038263 | orchestrator | Thursday 05 March 2026 00:20:17 +0000 (0:00:00.074) 0:00:10.375 ******** 2026-03-05 00:20:19.038274 | orchestrator | changed: [testbed-manager] 2026-03-05 00:20:19.038285 | orchestrator | 2026-03-05 00:20:19.038296 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-05 00:20:19.038308 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-05 00:20:19.038319 | orchestrator | 2026-03-05 00:20:19.038330 | orchestrator | 2026-03-05 00:20:19.038341 | orchestrator | TASKS RECAP 
******************************************************************** 2026-03-05 00:20:19.038351 | orchestrator | Thursday 05 March 2026 00:20:18 +0000 (0:00:01.159) 0:00:11.534 ******** 2026-03-05 00:20:19.038362 | orchestrator | =============================================================================== 2026-03-05 00:20:19.038373 | orchestrator | Gathering Facts --------------------------------------------------------- 4.54s 2026-03-05 00:20:19.038384 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 1.51s 2026-03-05 00:20:19.038394 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.16s 2026-03-05 00:20:19.038405 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.16s 2026-03-05 00:20:19.038421 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.08s 2026-03-05 00:20:19.038441 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.96s 2026-03-05 00:20:19.038484 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.55s 2026-03-05 00:20:19.038505 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.09s 2026-03-05 00:20:19.038523 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s 2026-03-05 00:20:19.038541 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.07s 2026-03-05 00:20:19.038561 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.07s 2026-03-05 00:20:19.038579 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s 2026-03-05 00:20:19.038596 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s 2026-03-05 00:20:19.337784 | 
orchestrator | + osism apply sshconfig 2026-03-05 00:20:31.385794 | orchestrator | 2026-03-05 00:20:31 | INFO  | Prepare task for execution of sshconfig. 2026-03-05 00:20:31.461535 | orchestrator | 2026-03-05 00:20:31 | INFO  | Task 6da4cfcd-5410-438f-9feb-d1aae5f75680 (sshconfig) was prepared for execution. 2026-03-05 00:20:31.461651 | orchestrator | 2026-03-05 00:20:31 | INFO  | It takes a moment until task 6da4cfcd-5410-438f-9feb-d1aae5f75680 (sshconfig) has been started and output is visible here. 2026-03-05 00:20:43.331155 | orchestrator | 2026-03-05 00:20:43.331290 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-03-05 00:20:43.331307 | orchestrator | 2026-03-05 00:20:43.331319 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-03-05 00:20:43.331331 | orchestrator | Thursday 05 March 2026 00:20:35 +0000 (0:00:00.157) 0:00:00.157 ******** 2026-03-05 00:20:43.331342 | orchestrator | ok: [testbed-manager] 2026-03-05 00:20:43.331354 | orchestrator | 2026-03-05 00:20:43.331365 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-03-05 00:20:43.331405 | orchestrator | Thursday 05 March 2026 00:20:36 +0000 (0:00:00.544) 0:00:00.701 ******** 2026-03-05 00:20:43.331417 | orchestrator | changed: [testbed-manager] 2026-03-05 00:20:43.331429 | orchestrator | 2026-03-05 00:20:43.331440 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-03-05 00:20:43.331451 | orchestrator | Thursday 05 March 2026 00:20:36 +0000 (0:00:00.512) 0:00:01.214 ******** 2026-03-05 00:20:43.331461 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-03-05 00:20:43.331473 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-03-05 00:20:43.331484 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-03-05 00:20:43.331494 | 
orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-03-05 00:20:43.331505 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-03-05 00:20:43.331516 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2026-03-05 00:20:43.331527 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2026-03-05 00:20:43.331538 | orchestrator | 2026-03-05 00:20:43.331549 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-03-05 00:20:43.331560 | orchestrator | Thursday 05 March 2026 00:20:42 +0000 (0:00:05.755) 0:00:06.970 ******** 2026-03-05 00:20:43.331570 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:20:43.331581 | orchestrator | 2026-03-05 00:20:43.331592 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-03-05 00:20:43.331603 | orchestrator | Thursday 05 March 2026 00:20:42 +0000 (0:00:00.082) 0:00:07.052 ******** 2026-03-05 00:20:43.331614 | orchestrator | changed: [testbed-manager] 2026-03-05 00:20:43.331625 | orchestrator | 2026-03-05 00:20:43.331636 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-05 00:20:43.331648 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-05 00:20:43.331660 | orchestrator | 2026-03-05 00:20:43.331733 | orchestrator | 2026-03-05 00:20:43.331747 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-05 00:20:43.331761 | orchestrator | Thursday 05 March 2026 00:20:43 +0000 (0:00:00.555) 0:00:07.608 ******** 2026-03-05 00:20:43.331775 | orchestrator | =============================================================================== 2026-03-05 00:20:43.331788 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.76s 2026-03-05 00:20:43.331799 | orchestrator | 
osism.commons.sshconfig : Assemble ssh config --------------------------- 0.56s 2026-03-05 00:20:43.331810 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.54s 2026-03-05 00:20:43.331821 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.51s 2026-03-05 00:20:43.331832 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.08s 2026-03-05 00:20:43.700133 | orchestrator | + osism apply known-hosts 2026-03-05 00:20:55.737762 | orchestrator | 2026-03-05 00:20:55 | INFO  | Prepare task for execution of known-hosts. 2026-03-05 00:20:55.807983 | orchestrator | 2026-03-05 00:20:55 | INFO  | Task a3e4654e-cad1-4485-84c0-ae60e41f57da (known-hosts) was prepared for execution. 2026-03-05 00:20:55.808087 | orchestrator | 2026-03-05 00:20:55 | INFO  | It takes a moment until task a3e4654e-cad1-4485-84c0-ae60e41f57da (known-hosts) has been started and output is visible here. 2026-03-05 00:21:11.836329 | orchestrator | 2026-03-05 00:21:11.836448 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-03-05 00:21:11.836465 | orchestrator | 2026-03-05 00:21:11.836478 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-03-05 00:21:11.836491 | orchestrator | Thursday 05 March 2026 00:20:59 +0000 (0:00:00.163) 0:00:00.163 ******** 2026-03-05 00:21:11.836503 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-03-05 00:21:11.836515 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-03-05 00:21:11.836549 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-03-05 00:21:11.836561 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-03-05 00:21:11.836572 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-03-05 00:21:11.836584 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 
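The known-hosts play that follows runs `ssh-keyscan` for every host and then writes the scanned `host keytype key` entries. A rough sketch of that scan-and-collect step, outside of Ansible (the host list and output path here are illustrative assumptions, not taken from the role):

```shell
#!/bin/sh
# Scan SSH host keys for a list of hosts and append them to a
# known_hosts file, roughly what the osism.commons.known_hosts role
# does in the play below. Hosts and file path are illustrative.
scan_known_hosts() {
    known_hosts="$1"
    shift
    for host in "$@"; do
        # ssh-keyscan emits "host keytype base64key" lines (rsa,
        # ecdsa, ed25519), matching the entries written by the
        # "Write scanned known_hosts entries" tasks.
        ssh-keyscan "$host" >> "$known_hosts" 2>/dev/null || true
    done
}
```

The role performs the scan twice, once per inventory hostname and once per `ansible_host` address, which is why the same keys later reappear under IPs such as 192.168.16.13.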
2026-03-05 00:21:11.836595 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-03-05 00:21:11.836606 | orchestrator | 2026-03-05 00:21:11.836618 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-03-05 00:21:11.836631 | orchestrator | Thursday 05 March 2026 00:21:05 +0000 (0:00:05.995) 0:00:06.158 ******** 2026-03-05 00:21:11.836653 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-03-05 00:21:11.836667 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-03-05 00:21:11.836770 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-03-05 00:21:11.836785 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-03-05 00:21:11.836796 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-03-05 00:21:11.836806 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-03-05 00:21:11.836817 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-03-05 00:21:11.836828 
| orchestrator | 2026-03-05 00:21:11.836838 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-05 00:21:11.836849 | orchestrator | Thursday 05 March 2026 00:21:06 +0000 (0:00:00.156) 0:00:06.315 ******** 2026-03-05 00:21:11.836864 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDDAyoSReAdR0O3Pt35jdyqL9UTgeTRIkyKUa8xUAL1kPgW6k0+EJgeOWZxVmRHo18K9g9+x3CZd9nQCjPZHRXEJiceaif87VcmZF1VRmHeUbpqAtC8exZo6nBJpIqZVXtkpoOmrJE0KspVwShYXLJOJlNfr92n8yP+pyswDwL+mYVLgqIDlvmGeJaGGg/lBth3Lx12/L83udHJiZ2x/3y7mHAzBqTqim9tg5lAP2qqgmIMmVYJkG7W/mXD9M7A1kCleCJ9peiWh/0RjhAfHaSQpQRp3lYNBewe1MqX/Y5w39NV62sRxXhS3YZ0eC1Lkgv0ioMXKBfI2QUP5vqtH0QR/k/m9M8Nmr/lp/xwzlUGxSUaMTXE/DLMdMb5Ar0PyvQ3P/v/RAziP+2QaGdmkn3BTrCJ2DA4kC4w075R3bkhmm8GFCN6fO8cFd1egSg3c35vpbcDS3URmRgjcgmNiyHxH2vrnA7KjcUzMrN/VirEZGoxX7Dom+5sLWiQMN9QiR0=) 2026-03-05 00:21:11.836879 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAiaQWac/LcYab+vU7QXYy6pXMub54BSkPIn1J449bJv) 2026-03-05 00:21:11.836893 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBF8zjDfWfcqp8oFlTg3B92To0/eC4peCVmlMLYpccCeww+Wx4RHrgGll6XGHwPLCxI6nqBzLG6nqZmGurLReVnE=) 2026-03-05 00:21:11.836906 | orchestrator | 2026-03-05 00:21:11.836917 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-05 00:21:11.836928 | orchestrator | Thursday 05 March 2026 00:21:07 +0000 (0:00:01.157) 0:00:07.473 ******** 2026-03-05 00:21:11.836939 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAhmZ2t9JQrcNc9kGiVwSFaoyyQXhK8xrwZ61OVstgn6/3+zBpvNnH8Qw9QZ2XuxtQU96illLG2faFu0nISZyKo=) 2026-03-05 00:21:11.836988 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC0oGMbbFNx4+laUg7/F+KHcazb7kyZYozWNl5ku+ZrUsfiOEr7EsRlSPQZMklbuevhQvp/pY/21/zAs0w6bmpUrNbVGDz8gpeCsFoWlPdJ91VEbInj6xLRmeeC21nun4q6F6mfcXGNBe1JXBsU8sqkPPFuot5DtoRXJyH2LiKxBaZtKvuOVrhwVoH+kX1N7h4803sZgmce5OdSGX5Vhi6yDOKZDohQJ/aEpGP8TIvYBRWxXB1ieeBgoPG0zQekY2eTH+U19/kYzlsIWqYqdAkwWRC//1VjypbqXgDQ8FGdN/fld7pBZ5/dh6LRbAw4uqD8Jqofqu65Pot6eFYaOvjJvYcvyUFWaFqMhVHnXwwKMn+W7eUYEIgyf/+gwS4gyFQOqqj14A2R34nJBVFUWQSv7o0hYWF4bqWwu557q1G+2V0F92JGi43BzHJQguXMFFwDAF39abIq686WMEKOBt2oaEHkoDAkUZg1HOW7b4yJTDz6JVKK+tkoYFdufBf+ZMM=) 2026-03-05 00:21:11.837002 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIM4xImzHjbwDeDrPxZiLbGE/VRfpxzPdKbQcV579bpuG) 2026-03-05 00:21:11.837013 | orchestrator | 2026-03-05 00:21:11.837024 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-05 00:21:11.837035 | orchestrator | Thursday 05 March 2026 00:21:08 +0000 (0:00:01.042) 0:00:08.515 ******** 2026-03-05 00:21:11.837046 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICUVDKiKM1IKxvVYQ672k3Q1yfBjq46Oliyl6GaqEzVS) 2026-03-05 00:21:11.837057 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCnfBjNqnXFpRUYdD+kz65QecFJO5Gyb9rqfNy1x4f8HiTrKkNuyKoSRNidTCvu34u/pFt6mFI6GNqTO6hKy43ZeRCTEhzw8lYvAmKmUBSgCBo3uHihe/SExE7hUf1WyTwtF/Ms7b65A/D2QXKS+ANuyIB8LaLgINwLYTyUnn3E1HeplkLNQrdSrDr84tDjxYd9G0HkTzGFA6NgKd/X0iCTcvO/EeAN/sj998Hd68aB3nnBLEgTUra5pPjZ2ocofLFi6poQvkohKeLu2UfRgfDLqOhLtdR+2gqzNY0m7lHkIB13oyqf79eBWYSuRCNjwXR0UKfS6MLS/RPVw6yQPErkWl2jBIz99l5OSxUC7s1zFi0iLyMR5kQgS0h+naXCmC9BdYX9bl10up65usRQymIBTmx4xuxq0S8PGyOJymz4RcwF5PE33uSaONHStGuHu5+n8FhsubPzcu23j8oOo+4w1FS3evbKxbeHb5a5Fs8iOP9SjfpAjAMU00xKFF41cx8=) 2026-03-05 00:21:11.837211 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPFfv/R9AmBwEEt8AGveQuOE/ClnRUEO/Dvej949mTIo2Ai0YaT789PI/bWCWKh1HazeiBLEpKwWl+vZ5PqDFyk=) 2026-03-05 00:21:11.837226 | orchestrator | 2026-03-05 00:21:11.837237 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-05 00:21:11.837248 | orchestrator | Thursday 05 March 2026 00:21:09 +0000 (0:00:01.039) 0:00:09.554 ******** 2026-03-05 00:21:11.837264 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIH5eKquH7bWpeaHnAG8AuZs0uRXVLDzZI5n+TvNMdAx4) 2026-03-05 00:21:11.837276 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDF2dzsVh6G/35r49ILNmLrgMIvE72EWuvjgBXv1AoboGIgZEWTVL5Xd5rGvdDdCoZu9J6ynguD4Wr/T/AH+swc2jepI+2/6UOwod994ZaiHQYOXOXDh8FUuRT3UTYojCTaz37d5rGGSXs7LvDCcTbvX/rZ2pz3+LmjhsOeTYtSOu5o5sy05R74IO0TO7kGe+cKqbGfVtEIaUgrlv76cO2RM7uKnl/qU5bvDULpygFNrdquu+zv1dQJYfxncMHaUPonKxTW/f7egqoNKPXCpFsPPrJAU2Y9818DCCUMqwU2BmXAViZ3nPpHfRKZXN24r3tm118KkKWXR0e+QbuuRtZ1k63frmtBz/AEoXZns0I6OO0M2ZCKaHSOQSHWPJdviaT82ARsN8h+rlEd1YdHy/OqmtvXu6GOm8jCpCSUnSCWGUqPUp1eKGR4FIqI+JY0y+uA14EEn2BLAET5x1vCNSvxy0HqEDypPJ8f6eIX7hj5JIexC9E6gfR4i727/W4ojfU=) 2026-03-05 00:21:11.837287 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCLygicZT77zBknr/UbUb8UwhzPByixLaONFlThk4jKws9ymklxJnjKdf5LTwE3D5rjZ8RtlSUebTBypexNIJ6o=) 2026-03-05 00:21:11.837298 | orchestrator | 2026-03-05 00:21:11.837310 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-05 00:21:11.837320 | orchestrator | Thursday 05 March 2026 00:21:10 +0000 (0:00:01.087) 0:00:10.642 ******** 2026-03-05 00:21:11.837332 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCuV9e00hz8FPI1HNTkDmShpdXLsQ2lJ1Dp7vMf9k11X24qO7bPmEB2FxVmHZU/dM95zOfZfWJCYVkvYSAeoK4ygZmKIx7EGiUvmSf0uUdwhuqpzj3HukMu76YhmLERtP5xvYRs798ODDqUpd/13SfZ0wX1i5KhrpSocGgaT5d9u2nMbTyRfon98FlE2lmJigV6oCJdCbjH70hae2ULViSM70TQmCNFi4Mxk6o6CTHOmOl90k3920Hy8dO+YlvE+I4yhGyFIBFy/JUWF6+Hd+qZSvu0VJNTy0HT3hFux1gkGlfjo1ev3bGqXoVnO/0PDt1IQ+PK6jwl5UH10DOq3iXUASErWJ3rX8pQzLe1RkzQUZWA1vjm9YvH/VzV+XYhwLh7a4DvlX6gMBOtEgHtPsnNfFceXYrBl4aSoWOmqm78G6PeGW3OOYMuwLQ7oSHcTl280uwGcK9/RSmHgpuddRy5RhsfKCuXiTwOOHE7VibW1Ah+QpGP+mVJH2ve5RKaj/0=) 2026-03-05 00:21:11.837351 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIOjmxHyFXNrEIwHqxrDmj+TzSFgmzUo37esin38HyKw86IB7HWJ38ji2NNxszi8mwOlqEmRWBDzHBe7JneMIKc=) 2026-03-05 00:21:11.837362 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAEZcPVvq3QPo1cJW7lqpVj/a/+63DIWs5/mcpFeXgC/) 2026-03-05 00:21:11.837373 | orchestrator | 2026-03-05 00:21:11.837384 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-05 00:21:11.837395 | orchestrator | Thursday 05 March 2026 00:21:11 +0000 (0:00:01.026) 0:00:11.669 ******** 2026-03-05 00:21:11.837413 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMsYiIa26H3xnQh/dCOwaaKaRG59BKIIcjbbL1mm4aoIjY/SIoJxXzgVX+FY/oKnkwW2gYpjH1a6DBImixIOgAE=) 2026-03-05 00:21:23.070843 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDEZZrlxw6zibHfE96WbxLJKMAqr7h7EI6nOByTw9gK9har8c6ALYGAAba/0GcLxOYqM0l1nhG3F0ktRW1hQbrCEvPiCDAAyMHFT3ighQKIdqnYT4EKXIm067b+O1bzK5zGyO/P7Y9wjQhcgPOweChHeo8847A1SYF16EFBqXJp/Z29jl4k+FQr++Ma7fiIErXU2Y4erVzXKR2gCW0so5k5VA4yo99ZiWk8xZHbjaG1gRskrQ6cHQPXOZG54tr5Qch+YlA0cJxH37YeBOu4TLgsTslf8tbOos2yVc6jzbSUTVDGla5lfeXz7oE3qcS2nrhXc7RcNFNLrLrjtP7pH2JVUB0By8hvkBgLDn2j4a1+6q4K4081GhGHBK3tVLo+ldpmapnSjEbddJiFZ1MTh8FHo/LbvS3euri1ON52Kj0KwUSI9TzsNIuViw2pt41YSszUhzcxQHJhey1Wv1OBOn+RzEUfbiSXXzSIL1GVhfz1l8XGr/FGnZ98F4Nf+d31kUE=) 2026-03-05 00:21:23.070969 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIs0xXpGzUHxGKhic2mH/y8IJHq4bAiHDWSeqpD00ulc) 2026-03-05 00:21:23.070987 | orchestrator | 2026-03-05 00:21:23.071000 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-05 00:21:23.071013 | orchestrator | Thursday 05 March 2026 00:21:12 +0000 (0:00:01.118) 0:00:12.788 ******** 2026-03-05 00:21:23.071025 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMAw5F6/7FEm+n3Eu0sTTO8bC8wnC5M5g5qGMfs/GJ5tsLtqS+pTxmSzoBzaTJh4qTu7UYNeYMwopC1GR4Xha/I=) 2026-03-05 00:21:23.071040 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC5zzRoAo/Fc4LLE142tcnA8uI4u8m/JM4K49D+vI9ijr6z8NRmIscJgHKiL5/Ak0HLdfp/9ZEkslbgQrTxia6iCqQJD2Wfq7Nz2R3nQYJZg0HjULO//qjF6MsHMkpdsXVG/4SuxBrzy9AimMD7H6+FT+kpi2ik6Gyv68kxCmhdmnp3gNdiHX0mexS6nnGoCDnB9a+tcqv0Jh6EXrpjDkyCsPuW3JMsm0xccFA+cimtrGckLO0eHZzpfUtQAY+ermvN+Vf1U7aMKJ8oYr6V8x8tI2CR0h56M9CBX3/TaHOOKNOI57oOV49froVJx5xJ3CLNxdo3/3QyLwuo3SToxq9qG6BvkUPeFB51V7kLDKHwVc+miuwDztMS+ETsO93NLOm72K+h7kCFo1sJpMsU15paSfWHklZOeDYw4af6zBM3zLRhcPVH4p/DYpiCYmmvLOrmJPYg9sA4uBb7Z0BYdnEF/8zKc2Lr9LfPMC67y/4F1/8VwRaDaaUH9PkL14TvvsE=) 2026-03-05 00:21:23.071052 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICq7Znlh1yla0rpagJaXvklQkeVKzczu78rFfqHxbRL1) 2026-03-05 00:21:23.071063 | orchestrator | 2026-03-05 00:21:23.071075 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-03-05 00:21:23.071087 | orchestrator | Thursday 05 March 2026 00:21:13 +0000 (0:00:01.099) 0:00:13.887 ******** 2026-03-05 00:21:23.071100 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-03-05 00:21:23.071112 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-03-05 00:21:23.071123 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-03-05 00:21:23.071134 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-03-05 00:21:23.071145 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-03-05 00:21:23.071175 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-03-05 00:21:23.071208 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-03-05 00:21:23.071220 | orchestrator | 2026-03-05 00:21:23.071231 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-03-05 00:21:23.071243 | orchestrator | Thursday 05 March 2026 00:21:18 +0000 (0:00:05.214) 0:00:19.101 ******** 2026-03-05 00:21:23.071255 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-03-05 00:21:23.071267 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-03-05 00:21:23.071278 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries 
of testbed-node-5) 2026-03-05 00:21:23.071289 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-03-05 00:21:23.071300 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-03-05 00:21:23.071311 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-03-05 00:21:23.071322 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-03-05 00:21:23.071333 | orchestrator | 2026-03-05 00:21:23.071361 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-05 00:21:23.071373 | orchestrator | Thursday 05 March 2026 00:21:19 +0000 (0:00:00.175) 0:00:19.277 ******** 2026-03-05 00:21:23.071387 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDDAyoSReAdR0O3Pt35jdyqL9UTgeTRIkyKUa8xUAL1kPgW6k0+EJgeOWZxVmRHo18K9g9+x3CZd9nQCjPZHRXEJiceaif87VcmZF1VRmHeUbpqAtC8exZo6nBJpIqZVXtkpoOmrJE0KspVwShYXLJOJlNfr92n8yP+pyswDwL+mYVLgqIDlvmGeJaGGg/lBth3Lx12/L83udHJiZ2x/3y7mHAzBqTqim9tg5lAP2qqgmIMmVYJkG7W/mXD9M7A1kCleCJ9peiWh/0RjhAfHaSQpQRp3lYNBewe1MqX/Y5w39NV62sRxXhS3YZ0eC1Lkgv0ioMXKBfI2QUP5vqtH0QR/k/m9M8Nmr/lp/xwzlUGxSUaMTXE/DLMdMb5Ar0PyvQ3P/v/RAziP+2QaGdmkn3BTrCJ2DA4kC4w075R3bkhmm8GFCN6fO8cFd1egSg3c35vpbcDS3URmRgjcgmNiyHxH2vrnA7KjcUzMrN/VirEZGoxX7Dom+5sLWiQMN9QiR0=) 2026-03-05 00:21:23.071400 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBF8zjDfWfcqp8oFlTg3B92To0/eC4peCVmlMLYpccCeww+Wx4RHrgGll6XGHwPLCxI6nqBzLG6nqZmGurLReVnE=) 2026-03-05 00:21:23.071411 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAiaQWac/LcYab+vU7QXYy6pXMub54BSkPIn1J449bJv) 2026-03-05 00:21:23.071422 | orchestrator | 2026-03-05 00:21:23.071433 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-05 00:21:23.071443 | orchestrator | Thursday 05 March 2026 00:21:20 +0000 (0:00:01.100) 0:00:20.377 ******** 2026-03-05 00:21:23.071455 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC0oGMbbFNx4+laUg7/F+KHcazb7kyZYozWNl5ku+ZrUsfiOEr7EsRlSPQZMklbuevhQvp/pY/21/zAs0w6bmpUrNbVGDz8gpeCsFoWlPdJ91VEbInj6xLRmeeC21nun4q6F6mfcXGNBe1JXBsU8sqkPPFuot5DtoRXJyH2LiKxBaZtKvuOVrhwVoH+kX1N7h4803sZgmce5OdSGX5Vhi6yDOKZDohQJ/aEpGP8TIvYBRWxXB1ieeBgoPG0zQekY2eTH+U19/kYzlsIWqYqdAkwWRC//1VjypbqXgDQ8FGdN/fld7pBZ5/dh6LRbAw4uqD8Jqofqu65Pot6eFYaOvjJvYcvyUFWaFqMhVHnXwwKMn+W7eUYEIgyf/+gwS4gyFQOqqj14A2R34nJBVFUWQSv7o0hYWF4bqWwu557q1G+2V0F92JGi43BzHJQguXMFFwDAF39abIq686WMEKOBt2oaEHkoDAkUZg1HOW7b4yJTDz6JVKK+tkoYFdufBf+ZMM=) 2026-03-05 00:21:23.071474 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAhmZ2t9JQrcNc9kGiVwSFaoyyQXhK8xrwZ61OVstgn6/3+zBpvNnH8Qw9QZ2XuxtQU96illLG2faFu0nISZyKo=) 2026-03-05 00:21:23.071486 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIM4xImzHjbwDeDrPxZiLbGE/VRfpxzPdKbQcV579bpuG) 2026-03-05 00:21:23.071496 | orchestrator | 2026-03-05 00:21:23.071507 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-05 00:21:23.071518 | orchestrator | Thursday 05 March 2026 00:21:21 +0000 (0:00:01.130) 0:00:21.508 ******** 2026-03-05 00:21:23.071530 | 
orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPFfv/R9AmBwEEt8AGveQuOE/ClnRUEO/Dvej949mTIo2Ai0YaT789PI/bWCWKh1HazeiBLEpKwWl+vZ5PqDFyk=) 2026-03-05 00:21:23.071542 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCnfBjNqnXFpRUYdD+kz65QecFJO5Gyb9rqfNy1x4f8HiTrKkNuyKoSRNidTCvu34u/pFt6mFI6GNqTO6hKy43ZeRCTEhzw8lYvAmKmUBSgCBo3uHihe/SExE7hUf1WyTwtF/Ms7b65A/D2QXKS+ANuyIB8LaLgINwLYTyUnn3E1HeplkLNQrdSrDr84tDjxYd9G0HkTzGFA6NgKd/X0iCTcvO/EeAN/sj998Hd68aB3nnBLEgTUra5pPjZ2ocofLFi6poQvkohKeLu2UfRgfDLqOhLtdR+2gqzNY0m7lHkIB13oyqf79eBWYSuRCNjwXR0UKfS6MLS/RPVw6yQPErkWl2jBIz99l5OSxUC7s1zFi0iLyMR5kQgS0h+naXCmC9BdYX9bl10up65usRQymIBTmx4xuxq0S8PGyOJymz4RcwF5PE33uSaONHStGuHu5+n8FhsubPzcu23j8oOo+4w1FS3evbKxbeHb5a5Fs8iOP9SjfpAjAMU00xKFF41cx8=) 2026-03-05 00:21:23.071553 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICUVDKiKM1IKxvVYQ672k3Q1yfBjq46Oliyl6GaqEzVS) 2026-03-05 00:21:23.071564 | orchestrator | 2026-03-05 00:21:23.071575 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-05 00:21:23.071586 | orchestrator | Thursday 05 March 2026 00:21:22 +0000 (0:00:01.076) 0:00:22.584 ******** 2026-03-05 00:21:23.071597 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIH5eKquH7bWpeaHnAG8AuZs0uRXVLDzZI5n+TvNMdAx4) 2026-03-05 00:21:23.071629 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDF2dzsVh6G/35r49ILNmLrgMIvE72EWuvjgBXv1AoboGIgZEWTVL5Xd5rGvdDdCoZu9J6ynguD4Wr/T/AH+swc2jepI+2/6UOwod994ZaiHQYOXOXDh8FUuRT3UTYojCTaz37d5rGGSXs7LvDCcTbvX/rZ2pz3+LmjhsOeTYtSOu5o5sy05R74IO0TO7kGe+cKqbGfVtEIaUgrlv76cO2RM7uKnl/qU5bvDULpygFNrdquu+zv1dQJYfxncMHaUPonKxTW/f7egqoNKPXCpFsPPrJAU2Y9818DCCUMqwU2BmXAViZ3nPpHfRKZXN24r3tm118KkKWXR0e+QbuuRtZ1k63frmtBz/AEoXZns0I6OO0M2ZCKaHSOQSHWPJdviaT82ARsN8h+rlEd1YdHy/OqmtvXu6GOm8jCpCSUnSCWGUqPUp1eKGR4FIqI+JY0y+uA14EEn2BLAET5x1vCNSvxy0HqEDypPJ8f6eIX7hj5JIexC9E6gfR4i727/W4ojfU=) 2026-03-05 00:21:27.882862 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCLygicZT77zBknr/UbUb8UwhzPByixLaONFlThk4jKws9ymklxJnjKdf5LTwE3D5rjZ8RtlSUebTBypexNIJ6o=) 2026-03-05 00:21:27.883022 | orchestrator | 2026-03-05 00:21:27.883070 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-05 00:21:27.883085 | orchestrator | Thursday 05 March 2026 00:21:23 +0000 (0:00:01.057) 0:00:23.642 ******** 2026-03-05 00:21:27.883096 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIOjmxHyFXNrEIwHqxrDmj+TzSFgmzUo37esin38HyKw86IB7HWJ38ji2NNxszi8mwOlqEmRWBDzHBe7JneMIKc=) 2026-03-05 00:21:27.883111 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCuV9e00hz8FPI1HNTkDmShpdXLsQ2lJ1Dp7vMf9k11X24qO7bPmEB2FxVmHZU/dM95zOfZfWJCYVkvYSAeoK4ygZmKIx7EGiUvmSf0uUdwhuqpzj3HukMu76YhmLERtP5xvYRs798ODDqUpd/13SfZ0wX1i5KhrpSocGgaT5d9u2nMbTyRfon98FlE2lmJigV6oCJdCbjH70hae2ULViSM70TQmCNFi4Mxk6o6CTHOmOl90k3920Hy8dO+YlvE+I4yhGyFIBFy/JUWF6+Hd+qZSvu0VJNTy0HT3hFux1gkGlfjo1ev3bGqXoVnO/0PDt1IQ+PK6jwl5UH10DOq3iXUASErWJ3rX8pQzLe1RkzQUZWA1vjm9YvH/VzV+XYhwLh7a4DvlX6gMBOtEgHtPsnNfFceXYrBl4aSoWOmqm78G6PeGW3OOYMuwLQ7oSHcTl280uwGcK9/RSmHgpuddRy5RhsfKCuXiTwOOHE7VibW1Ah+QpGP+mVJH2ve5RKaj/0=) 
2026-03-05 00:21:27.883155 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAEZcPVvq3QPo1cJW7lqpVj/a/+63DIWs5/mcpFeXgC/) 2026-03-05 00:21:27.883169 | orchestrator | 2026-03-05 00:21:27.883195 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-05 00:21:27.883207 | orchestrator | Thursday 05 March 2026 00:21:24 +0000 (0:00:01.079) 0:00:24.722 ******** 2026-03-05 00:21:27.883218 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMsYiIa26H3xnQh/dCOwaaKaRG59BKIIcjbbL1mm4aoIjY/SIoJxXzgVX+FY/oKnkwW2gYpjH1a6DBImixIOgAE=) 2026-03-05 00:21:27.883230 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDEZZrlxw6zibHfE96WbxLJKMAqr7h7EI6nOByTw9gK9har8c6ALYGAAba/0GcLxOYqM0l1nhG3F0ktRW1hQbrCEvPiCDAAyMHFT3ighQKIdqnYT4EKXIm067b+O1bzK5zGyO/P7Y9wjQhcgPOweChHeo8847A1SYF16EFBqXJp/Z29jl4k+FQr++Ma7fiIErXU2Y4erVzXKR2gCW0so5k5VA4yo99ZiWk8xZHbjaG1gRskrQ6cHQPXOZG54tr5Qch+YlA0cJxH37YeBOu4TLgsTslf8tbOos2yVc6jzbSUTVDGla5lfeXz7oE3qcS2nrhXc7RcNFNLrLrjtP7pH2JVUB0By8hvkBgLDn2j4a1+6q4K4081GhGHBK3tVLo+ldpmapnSjEbddJiFZ1MTh8FHo/LbvS3euri1ON52Kj0KwUSI9TzsNIuViw2pt41YSszUhzcxQHJhey1Wv1OBOn+RzEUfbiSXXzSIL1GVhfz1l8XGr/FGnZ98F4Nf+d31kUE=) 2026-03-05 00:21:27.883242 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIs0xXpGzUHxGKhic2mH/y8IJHq4bAiHDWSeqpD00ulc) 2026-03-05 00:21:27.883254 | orchestrator | 2026-03-05 00:21:27.883273 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-05 00:21:27.883303 | orchestrator | Thursday 05 March 2026 00:21:25 +0000 (0:00:01.076) 0:00:25.799 ******** 2026-03-05 00:21:27.883322 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 
AAAAC3NzaC1lZDI1NTE5AAAAICq7Znlh1yla0rpagJaXvklQkeVKzczu78rFfqHxbRL1) 2026-03-05 00:21:27.883342 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC5zzRoAo/Fc4LLE142tcnA8uI4u8m/JM4K49D+vI9ijr6z8NRmIscJgHKiL5/Ak0HLdfp/9ZEkslbgQrTxia6iCqQJD2Wfq7Nz2R3nQYJZg0HjULO//qjF6MsHMkpdsXVG/4SuxBrzy9AimMD7H6+FT+kpi2ik6Gyv68kxCmhdmnp3gNdiHX0mexS6nnGoCDnB9a+tcqv0Jh6EXrpjDkyCsPuW3JMsm0xccFA+cimtrGckLO0eHZzpfUtQAY+ermvN+Vf1U7aMKJ8oYr6V8x8tI2CR0h56M9CBX3/TaHOOKNOI57oOV49froVJx5xJ3CLNxdo3/3QyLwuo3SToxq9qG6BvkUPeFB51V7kLDKHwVc+miuwDztMS+ETsO93NLOm72K+h7kCFo1sJpMsU15paSfWHklZOeDYw4af6zBM3zLRhcPVH4p/DYpiCYmmvLOrmJPYg9sA4uBb7Z0BYdnEF/8zKc2Lr9LfPMC67y/4F1/8VwRaDaaUH9PkL14TvvsE=) 2026-03-05 00:21:27.883362 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMAw5F6/7FEm+n3Eu0sTTO8bC8wnC5M5g5qGMfs/GJ5tsLtqS+pTxmSzoBzaTJh4qTu7UYNeYMwopC1GR4Xha/I=) 2026-03-05 00:21:27.883381 | orchestrator | 2026-03-05 00:21:27.883398 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-03-05 00:21:27.883418 | orchestrator | Thursday 05 March 2026 00:21:26 +0000 (0:00:01.079) 0:00:26.878 ******** 2026-03-05 00:21:27.883437 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-03-05 00:21:27.883456 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-03-05 00:21:27.883472 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-03-05 00:21:27.883491 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-03-05 00:21:27.883536 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-05 00:21:27.883592 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-03-05 00:21:27.883612 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-03-05 00:21:27.883632 | orchestrator | 
skipping: [testbed-manager] 2026-03-05 00:21:27.883650 | orchestrator | 2026-03-05 00:21:27.883691 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-03-05 00:21:27.883713 | orchestrator | Thursday 05 March 2026 00:21:26 +0000 (0:00:00.179) 0:00:27.057 ******** 2026-03-05 00:21:27.883746 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:21:27.883758 | orchestrator | 2026-03-05 00:21:27.883769 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-03-05 00:21:27.883779 | orchestrator | Thursday 05 March 2026 00:21:26 +0000 (0:00:00.055) 0:00:27.112 ******** 2026-03-05 00:21:27.883793 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:21:27.883812 | orchestrator | 2026-03-05 00:21:27.883830 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-03-05 00:21:27.883847 | orchestrator | Thursday 05 March 2026 00:21:26 +0000 (0:00:00.056) 0:00:27.169 ******** 2026-03-05 00:21:27.883866 | orchestrator | changed: [testbed-manager] 2026-03-05 00:21:27.883883 | orchestrator | 2026-03-05 00:21:27.883901 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-05 00:21:27.883919 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-05 00:21:27.883938 | orchestrator | 2026-03-05 00:21:27.883956 | orchestrator | 2026-03-05 00:21:27.883975 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-05 00:21:27.883995 | orchestrator | Thursday 05 March 2026 00:21:27 +0000 (0:00:00.719) 0:00:27.888 ******** 2026-03-05 00:21:27.884013 | orchestrator | =============================================================================== 2026-03-05 00:21:27.884029 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.00s 2026-03-05 
00:21:27.884040 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.21s 2026-03-05 00:21:27.884052 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.16s 2026-03-05 00:21:27.884063 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2026-03-05 00:21:27.884074 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2026-03-05 00:21:27.884085 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2026-03-05 00:21:27.884095 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2026-03-05 00:21:27.884106 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2026-03-05 00:21:27.884117 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2026-03-05 00:21:27.884128 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2026-03-05 00:21:27.884138 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2026-03-05 00:21:27.884149 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2026-03-05 00:21:27.884160 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-03-05 00:21:27.884181 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-03-05 00:21:27.884192 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-03-05 00:21:27.884203 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-03-05 00:21:27.884214 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.72s 2026-03-05 
00:21:27.884225 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.18s 2026-03-05 00:21:27.884236 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.18s 2026-03-05 00:21:27.884247 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.16s 2026-03-05 00:21:28.198720 | orchestrator | + osism apply squid 2026-03-05 00:21:40.265486 | orchestrator | 2026-03-05 00:21:40 | INFO  | Prepare task for execution of squid. 2026-03-05 00:21:40.338378 | orchestrator | 2026-03-05 00:21:40 | INFO  | Task 0596ddce-6aeb-490e-b7a5-ec506e8a68b5 (squid) was prepared for execution. 2026-03-05 00:21:40.338477 | orchestrator | 2026-03-05 00:21:40 | INFO  | It takes a moment until task 0596ddce-6aeb-490e-b7a5-ec506e8a68b5 (squid) has been started and output is visible here. 2026-03-05 00:23:38.243504 | orchestrator | 2026-03-05 00:23:38.243616 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-03-05 00:23:38.243629 | orchestrator | 2026-03-05 00:23:38.243636 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-03-05 00:23:38.243643 | orchestrator | Thursday 05 March 2026 00:21:44 +0000 (0:00:00.164) 0:00:00.164 ******** 2026-03-05 00:23:38.243650 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-03-05 00:23:38.243658 | orchestrator | 2026-03-05 00:23:38.243665 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-03-05 00:23:38.243697 | orchestrator | Thursday 05 March 2026 00:21:44 +0000 (0:00:00.105) 0:00:00.270 ******** 2026-03-05 00:23:38.243708 | orchestrator | ok: [testbed-manager] 2026-03-05 00:23:38.243721 | orchestrator | 2026-03-05 00:23:38.243731 | 
orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-03-05 00:23:38.243743 | orchestrator | Thursday 05 March 2026 00:21:46 +0000 (0:00:01.505) 0:00:01.775 ******** 2026-03-05 00:23:38.243786 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-03-05 00:23:38.243795 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-03-05 00:23:38.243803 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-03-05 00:23:38.243810 | orchestrator | 2026-03-05 00:23:38.243817 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-03-05 00:23:38.243824 | orchestrator | Thursday 05 March 2026 00:21:47 +0000 (0:00:01.216) 0:00:02.991 ******** 2026-03-05 00:23:38.243831 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-03-05 00:23:38.243837 | orchestrator | 2026-03-05 00:23:38.243844 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-03-05 00:23:38.243850 | orchestrator | Thursday 05 March 2026 00:21:48 +0000 (0:00:01.082) 0:00:04.074 ******** 2026-03-05 00:23:38.243857 | orchestrator | ok: [testbed-manager] 2026-03-05 00:23:38.243863 | orchestrator | 2026-03-05 00:23:38.243869 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-03-05 00:23:38.243875 | orchestrator | Thursday 05 March 2026 00:21:48 +0000 (0:00:00.349) 0:00:04.423 ******** 2026-03-05 00:23:38.243881 | orchestrator | changed: [testbed-manager] 2026-03-05 00:23:38.243888 | orchestrator | 2026-03-05 00:23:38.243894 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-03-05 00:23:38.243900 | orchestrator | Thursday 05 March 2026 00:21:49 +0000 (0:00:00.925) 0:00:05.349 ******** 2026-03-05 00:23:38.243907 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 
retries left). 2026-03-05 00:23:38.243914 | orchestrator | ok: [testbed-manager] 2026-03-05 00:23:38.243920 | orchestrator | 2026-03-05 00:23:38.243926 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-03-05 00:23:38.243932 | orchestrator | Thursday 05 March 2026 00:22:24 +0000 (0:00:35.166) 0:00:40.516 ******** 2026-03-05 00:23:38.243939 | orchestrator | changed: [testbed-manager] 2026-03-05 00:23:38.243945 | orchestrator | 2026-03-05 00:23:38.243957 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-03-05 00:23:38.243964 | orchestrator | Thursday 05 March 2026 00:22:37 +0000 (0:00:12.236) 0:00:52.752 ******** 2026-03-05 00:23:38.243970 | orchestrator | Pausing for 60 seconds 2026-03-05 00:23:38.243977 | orchestrator | changed: [testbed-manager] 2026-03-05 00:23:38.243983 | orchestrator | 2026-03-05 00:23:38.243989 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-03-05 00:23:38.243995 | orchestrator | Thursday 05 March 2026 00:23:37 +0000 (0:01:00.080) 0:01:52.833 ******** 2026-03-05 00:23:38.244002 | orchestrator | ok: [testbed-manager] 2026-03-05 00:23:38.244008 | orchestrator | 2026-03-05 00:23:38.244014 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-03-05 00:23:38.244034 | orchestrator | Thursday 05 March 2026 00:23:37 +0000 (0:00:00.065) 0:01:52.899 ******** 2026-03-05 00:23:38.244041 | orchestrator | changed: [testbed-manager] 2026-03-05 00:23:38.244047 | orchestrator | 2026-03-05 00:23:38.244054 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-05 00:23:38.244061 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 00:23:38.244068 | orchestrator | 2026-03-05 00:23:38.244076 | orchestrator | 2026-03-05 00:23:38.244083 | 
orchestrator | TASKS RECAP ******************************************************************** 2026-03-05 00:23:38.244090 | orchestrator | Thursday 05 March 2026 00:23:37 +0000 (0:00:00.599) 0:01:53.499 ******** 2026-03-05 00:23:38.244098 | orchestrator | =============================================================================== 2026-03-05 00:23:38.244105 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s 2026-03-05 00:23:38.244113 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 35.17s 2026-03-05 00:23:38.244120 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.24s 2026-03-05 00:23:38.244127 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.51s 2026-03-05 00:23:38.244134 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.22s 2026-03-05 00:23:38.244142 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.08s 2026-03-05 00:23:38.244149 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.93s 2026-03-05 00:23:38.244156 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.60s 2026-03-05 00:23:38.244163 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.35s 2026-03-05 00:23:38.244170 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.11s 2026-03-05 00:23:38.244177 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s 2026-03-05 00:23:38.557179 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-03-05 00:23:38.557268 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla 2026-03-05 00:23:38.562837 | orchestrator | + set -e 2026-03-05 00:23:38.562889 | orchestrator | + NAMESPACE=kolla 
2026-03-05 00:23:38.562900 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-03-05 00:23:38.569812 | orchestrator | ++ semver latest 9.0.0 2026-03-05 00:23:38.628861 | orchestrator | + [[ -1 -lt 0 ]] 2026-03-05 00:23:38.628957 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-03-05 00:23:38.629481 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-03-05 00:23:50.711129 | orchestrator | 2026-03-05 00:23:50 | INFO  | Prepare task for execution of operator. 2026-03-05 00:23:50.794245 | orchestrator | 2026-03-05 00:23:50 | INFO  | Task 288d7e53-da21-4003-aa18-591ba19c745b (operator) was prepared for execution. 2026-03-05 00:23:50.794340 | orchestrator | 2026-03-05 00:23:50 | INFO  | It takes a moment until task 288d7e53-da21-4003-aa18-591ba19c745b (operator) has been started and output is visible here. 2026-03-05 00:24:06.812882 | orchestrator | 2026-03-05 00:24:06.813007 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2026-03-05 00:24:06.813024 | orchestrator | 2026-03-05 00:24:06.813036 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-05 00:24:06.813048 | orchestrator | Thursday 05 March 2026 00:23:55 +0000 (0:00:00.149) 0:00:00.149 ******** 2026-03-05 00:24:06.813060 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:24:06.813072 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:24:06.813083 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:24:06.813093 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:24:06.813104 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:24:06.813119 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:24:06.813131 | orchestrator | 2026-03-05 00:24:06.813142 | orchestrator | TASK [Do not require tty for all users] **************************************** 2026-03-05 00:24:06.813177 | orchestrator | Thursday 05 March 
2026 00:23:58 +0000 (0:00:03.213) 0:00:03.362 ******** 2026-03-05 00:24:06.813189 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:24:06.813200 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:24:06.813211 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:24:06.813221 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:24:06.813232 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:24:06.813243 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:24:06.813253 | orchestrator | 2026-03-05 00:24:06.813264 | orchestrator | PLAY [Apply role operator] ***************************************************** 2026-03-05 00:24:06.813275 | orchestrator | 2026-03-05 00:24:06.813286 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-03-05 00:24:06.813297 | orchestrator | Thursday 05 March 2026 00:23:59 +0000 (0:00:00.796) 0:00:04.159 ******** 2026-03-05 00:24:06.813335 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:24:06.813346 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:24:06.813359 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:24:06.813372 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:24:06.813385 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:24:06.813397 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:24:06.813410 | orchestrator | 2026-03-05 00:24:06.813422 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-03-05 00:24:06.813435 | orchestrator | Thursday 05 March 2026 00:23:59 +0000 (0:00:00.174) 0:00:04.333 ******** 2026-03-05 00:24:06.813449 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:24:06.813462 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:24:06.813475 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:24:06.813487 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:24:06.813516 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:24:06.813528 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:24:06.813539 | 
orchestrator | 2026-03-05 00:24:06.813550 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-03-05 00:24:06.813561 | orchestrator | Thursday 05 March 2026 00:23:59 +0000 (0:00:00.197) 0:00:04.531 ******** 2026-03-05 00:24:06.813572 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:24:06.813583 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:24:06.813594 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:24:06.813604 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:24:06.813615 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:24:06.813626 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:24:06.813637 | orchestrator | 2026-03-05 00:24:06.813648 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-03-05 00:24:06.813659 | orchestrator | Thursday 05 March 2026 00:24:00 +0000 (0:00:00.609) 0:00:05.141 ******** 2026-03-05 00:24:06.813669 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:24:06.813680 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:24:06.813690 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:24:06.813701 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:24:06.813731 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:24:06.813742 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:24:06.813753 | orchestrator | 2026-03-05 00:24:06.813764 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-03-05 00:24:06.813775 | orchestrator | Thursday 05 March 2026 00:24:00 +0000 (0:00:00.857) 0:00:05.999 ******** 2026-03-05 00:24:06.813786 | orchestrator | changed: [testbed-node-0] => (item=adm) 2026-03-05 00:24:06.813797 | orchestrator | changed: [testbed-node-2] => (item=adm) 2026-03-05 00:24:06.813808 | orchestrator | changed: [testbed-node-1] => (item=adm) 2026-03-05 00:24:06.813818 | orchestrator | changed: [testbed-node-3] => 
(item=adm) 2026-03-05 00:24:06.813829 | orchestrator | changed: [testbed-node-4] => (item=adm) 2026-03-05 00:24:06.813840 | orchestrator | changed: [testbed-node-5] => (item=adm) 2026-03-05 00:24:06.813851 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2026-03-05 00:24:06.813861 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2026-03-05 00:24:06.813881 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2026-03-05 00:24:06.813892 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2026-03-05 00:24:06.813902 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2026-03-05 00:24:06.813913 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2026-03-05 00:24:06.813924 | orchestrator | 2026-03-05 00:24:06.813935 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-03-05 00:24:06.813946 | orchestrator | Thursday 05 March 2026 00:24:02 +0000 (0:00:01.180) 0:00:07.180 ******** 2026-03-05 00:24:06.813957 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:24:06.813968 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:24:06.813978 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:24:06.813989 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:24:06.813999 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:24:06.814010 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:24:06.814083 | orchestrator | 2026-03-05 00:24:06.814095 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-03-05 00:24:06.814107 | orchestrator | Thursday 05 March 2026 00:24:03 +0000 (0:00:01.214) 0:00:08.394 ******** 2026-03-05 00:24:06.814118 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2026-03-05 00:24:06.814129 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2026-03-05 00:24:06.814140 | orchestrator | changed: [testbed-node-5] => (item=export 
LANGUAGE=C.UTF-8) 2026-03-05 00:24:06.814151 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2026-03-05 00:24:06.814162 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2026-03-05 00:24:06.814190 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2026-03-05 00:24:06.814202 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2026-03-05 00:24:06.814213 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2026-03-05 00:24:06.814224 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2026-03-05 00:24:06.814235 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2026-03-05 00:24:06.814246 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2026-03-05 00:24:06.814256 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2026-03-05 00:24:06.814267 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2026-03-05 00:24:06.814277 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2026-03-05 00:24:06.814288 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2026-03-05 00:24:06.814299 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2026-03-05 00:24:06.814310 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2026-03-05 00:24:06.814320 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2026-03-05 00:24:06.814331 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2026-03-05 00:24:06.814342 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2026-03-05 00:24:06.814352 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2026-03-05 00:24:06.814363 | orchestrator | 2026-03-05 00:24:06.814374 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-03-05 00:24:06.814385 | orchestrator | Thursday 05 March 2026 00:24:04 +0000 (0:00:01.222) 0:00:09.617 ******** 2026-03-05 00:24:06.814396 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:24:06.814406 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:24:06.814417 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:24:06.814434 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:24:06.814445 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:24:06.814456 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:24:06.814467 | orchestrator | 2026-03-05 00:24:06.814478 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-03-05 00:24:06.814496 | orchestrator | Thursday 05 March 2026 00:24:04 +0000 (0:00:00.175) 0:00:09.792 ******** 2026-03-05 00:24:06.814506 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:24:06.814517 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:24:06.814528 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:24:06.814539 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:24:06.814549 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:24:06.814560 | orchestrator | skipping: [testbed-node-5] 2026-03-05 
00:24:06.814571 | orchestrator | 2026-03-05 00:24:06.814582 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-03-05 00:24:06.814593 | orchestrator | Thursday 05 March 2026 00:24:04 +0000 (0:00:00.207) 0:00:09.999 ******** 2026-03-05 00:24:06.814604 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:24:06.814615 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:24:06.814625 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:24:06.814636 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:24:06.814646 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:24:06.814657 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:24:06.814668 | orchestrator | 2026-03-05 00:24:06.814679 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-03-05 00:24:06.814690 | orchestrator | Thursday 05 March 2026 00:24:05 +0000 (0:00:00.573) 0:00:10.573 ******** 2026-03-05 00:24:06.814701 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:24:06.814820 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:24:06.814836 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:24:06.814847 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:24:06.814858 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:24:06.814868 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:24:06.814879 | orchestrator | 2026-03-05 00:24:06.814890 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-03-05 00:24:06.814901 | orchestrator | Thursday 05 March 2026 00:24:05 +0000 (0:00:00.199) 0:00:10.773 ******** 2026-03-05 00:24:06.814912 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-05 00:24:06.814923 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:24:06.814934 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-05 00:24:06.814945 | orchestrator | changed: 
[testbed-node-5] 2026-03-05 00:24:06.814955 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-05 00:24:06.814994 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-03-05 00:24:06.815005 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-05 00:24:06.815016 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:24:06.815027 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-03-05 00:24:06.815038 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:24:06.815048 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:24:06.815059 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:24:06.815070 | orchestrator | 2026-03-05 00:24:06.815080 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-03-05 00:24:06.815091 | orchestrator | Thursday 05 March 2026 00:24:06 +0000 (0:00:00.689) 0:00:11.463 ******** 2026-03-05 00:24:06.815102 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:24:06.815112 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:24:06.815123 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:24:06.815134 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:24:06.815144 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:24:06.815155 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:24:06.815165 | orchestrator | 2026-03-05 00:24:06.815176 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-03-05 00:24:06.815187 | orchestrator | Thursday 05 March 2026 00:24:06 +0000 (0:00:00.223) 0:00:11.687 ******** 2026-03-05 00:24:06.815198 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:24:06.815209 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:24:06.815219 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:24:06.815238 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:24:06.815258 | orchestrator | skipping: [testbed-node-4] 
2026-03-05 00:24:08.126416 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:24:08.126541 | orchestrator | 2026-03-05 00:24:08.126559 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-03-05 00:24:08.126572 | orchestrator | Thursday 05 March 2026 00:24:06 +0000 (0:00:00.192) 0:00:11.879 ******** 2026-03-05 00:24:08.126584 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:24:08.126594 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:24:08.127316 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:24:08.127338 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:24:08.127351 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:24:08.127362 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:24:08.127372 | orchestrator | 2026-03-05 00:24:08.127384 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-03-05 00:24:08.127395 | orchestrator | Thursday 05 March 2026 00:24:06 +0000 (0:00:00.150) 0:00:12.030 ******** 2026-03-05 00:24:08.127406 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:24:08.127417 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:24:08.127428 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:24:08.127439 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:24:08.127449 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:24:08.127460 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:24:08.127471 | orchestrator | 2026-03-05 00:24:08.127481 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-03-05 00:24:08.127492 | orchestrator | Thursday 05 March 2026 00:24:07 +0000 (0:00:00.660) 0:00:12.690 ******** 2026-03-05 00:24:08.127503 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:24:08.127513 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:24:08.127524 | orchestrator | skipping: [testbed-node-2] 2026-03-05 
00:24:08.127535 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:24:08.127545 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:24:08.127556 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:24:08.127567 | orchestrator | 2026-03-05 00:24:08.127578 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-05 00:24:08.127590 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-05 00:24:08.127623 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-05 00:24:08.127635 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-05 00:24:08.127646 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-05 00:24:08.127657 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-05 00:24:08.127668 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-05 00:24:08.127679 | orchestrator | 2026-03-05 00:24:08.127689 | orchestrator | 2026-03-05 00:24:08.127700 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-05 00:24:08.127711 | orchestrator | Thursday 05 March 2026 00:24:07 +0000 (0:00:00.249) 0:00:12.940 ******** 2026-03-05 00:24:08.127753 | orchestrator | =============================================================================== 2026-03-05 00:24:08.127764 | orchestrator | Gathering Facts --------------------------------------------------------- 3.21s 2026-03-05 00:24:08.127775 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.22s 2026-03-05 00:24:08.127786 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 
1.21s 2026-03-05 00:24:08.127821 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.18s 2026-03-05 00:24:08.127832 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.86s 2026-03-05 00:24:08.127843 | orchestrator | Do not require tty for all users ---------------------------------------- 0.80s 2026-03-05 00:24:08.127853 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.69s 2026-03-05 00:24:08.127864 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.66s 2026-03-05 00:24:08.127874 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.61s 2026-03-05 00:24:08.127885 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.57s 2026-03-05 00:24:08.127896 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.25s 2026-03-05 00:24:08.127907 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.22s 2026-03-05 00:24:08.127918 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.21s 2026-03-05 00:24:08.127928 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.20s 2026-03-05 00:24:08.127939 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.20s 2026-03-05 00:24:08.127949 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.19s 2026-03-05 00:24:08.127960 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.18s 2026-03-05 00:24:08.127971 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.17s 2026-03-05 00:24:08.127981 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts 
-------------- 0.15s 2026-03-05 00:24:08.426830 | orchestrator | + osism apply --environment custom facts 2026-03-05 00:24:10.425229 | orchestrator | 2026-03-05 00:24:10 | INFO  | Trying to run play facts in environment custom 2026-03-05 00:24:20.437184 | orchestrator | 2026-03-05 00:24:20 | INFO  | Prepare task for execution of facts. 2026-03-05 00:24:20.516640 | orchestrator | 2026-03-05 00:24:20 | INFO  | Task 4f81625d-d74f-45d6-b179-b288bcad7dc8 (facts) was prepared for execution. 2026-03-05 00:24:20.516795 | orchestrator | 2026-03-05 00:24:20 | INFO  | It takes a moment until task 4f81625d-d74f-45d6-b179-b288bcad7dc8 (facts) has been started and output is visible here. 2026-03-05 00:25:01.820382 | orchestrator | 2026-03-05 00:25:01.820467 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2026-03-05 00:25:01.820474 | orchestrator | 2026-03-05 00:25:01.820479 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-03-05 00:25:01.820484 | orchestrator | Thursday 05 March 2026 00:24:24 +0000 (0:00:00.070) 0:00:00.070 ******** 2026-03-05 00:25:01.820488 | orchestrator | ok: [testbed-manager] 2026-03-05 00:25:01.820493 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:25:01.820498 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:25:01.820502 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:25:01.820506 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:25:01.820509 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:25:01.820513 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:25:01.820517 | orchestrator | 2026-03-05 00:25:01.820521 | orchestrator | TASK [Copy fact file] ********************************************************** 2026-03-05 00:25:01.820525 | orchestrator | Thursday 05 March 2026 00:24:26 +0000 (0:00:01.354) 0:00:01.425 ******** 2026-03-05 00:25:01.820528 | orchestrator | ok: [testbed-manager] 2026-03-05 
00:25:01.820532 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:25:01.820536 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:25:01.820540 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:25:01.820546 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:25:01.820566 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:25:01.820570 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:25:01.820588 | orchestrator | 2026-03-05 00:25:01.820592 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2026-03-05 00:25:01.820595 | orchestrator | 2026-03-05 00:25:01.820599 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-03-05 00:25:01.820603 | orchestrator | Thursday 05 March 2026 00:24:27 +0000 (0:00:01.179) 0:00:02.604 ******** 2026-03-05 00:25:01.820607 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:25:01.820611 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:25:01.820614 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:25:01.820618 | orchestrator | 2026-03-05 00:25:01.820622 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-03-05 00:25:01.820626 | orchestrator | Thursday 05 March 2026 00:24:27 +0000 (0:00:00.123) 0:00:02.728 ******** 2026-03-05 00:25:01.820630 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:25:01.820634 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:25:01.820637 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:25:01.820641 | orchestrator | 2026-03-05 00:25:01.820645 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-03-05 00:25:01.820648 | orchestrator | Thursday 05 March 2026 00:24:27 +0000 (0:00:00.228) 0:00:02.956 ******** 2026-03-05 00:25:01.820652 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:25:01.820656 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:25:01.820659 | orchestrator 
| ok: [testbed-node-5] 2026-03-05 00:25:01.820663 | orchestrator | 2026-03-05 00:25:01.820667 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-03-05 00:25:01.820671 | orchestrator | Thursday 05 March 2026 00:24:27 +0000 (0:00:00.221) 0:00:03.178 ******** 2026-03-05 00:25:01.820675 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-05 00:25:01.820681 | orchestrator | 2026-03-05 00:25:01.820684 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-03-05 00:25:01.820688 | orchestrator | Thursday 05 March 2026 00:24:27 +0000 (0:00:00.127) 0:00:03.305 ******** 2026-03-05 00:25:01.820692 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:25:01.820695 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:25:01.820699 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:25:01.820703 | orchestrator | 2026-03-05 00:25:01.820707 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-03-05 00:25:01.820710 | orchestrator | Thursday 05 March 2026 00:24:28 +0000 (0:00:00.462) 0:00:03.767 ******** 2026-03-05 00:25:01.820714 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:25:01.820832 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:25:01.820839 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:25:01.820843 | orchestrator | 2026-03-05 00:25:01.820847 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-03-05 00:25:01.820851 | orchestrator | Thursday 05 March 2026 00:24:28 +0000 (0:00:00.133) 0:00:03.901 ******** 2026-03-05 00:25:01.820854 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:25:01.820858 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:25:01.820862 | orchestrator | changed: [testbed-node-5] 
2026-03-05 00:25:01.820865 | orchestrator | 2026-03-05 00:25:01.820869 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-03-05 00:25:01.820873 | orchestrator | Thursday 05 March 2026 00:24:29 +0000 (0:00:01.025) 0:00:04.926 ******** 2026-03-05 00:25:01.820877 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:25:01.820880 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:25:01.820884 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:25:01.820888 | orchestrator | 2026-03-05 00:25:01.820892 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-03-05 00:25:01.820896 | orchestrator | Thursday 05 March 2026 00:24:30 +0000 (0:00:00.448) 0:00:05.375 ******** 2026-03-05 00:25:01.820899 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:25:01.820903 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:25:01.820907 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:25:01.820916 | orchestrator | 2026-03-05 00:25:01.820920 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-03-05 00:25:01.820924 | orchestrator | Thursday 05 March 2026 00:24:31 +0000 (0:00:01.063) 0:00:06.439 ******** 2026-03-05 00:25:01.820927 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:25:01.820931 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:25:01.820935 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:25:01.820938 | orchestrator | 2026-03-05 00:25:01.820942 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2026-03-05 00:25:01.820947 | orchestrator | Thursday 05 March 2026 00:24:45 +0000 (0:00:14.577) 0:00:21.016 ******** 2026-03-05 00:25:01.820952 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:25:01.820956 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:25:01.820960 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:25:01.820965 
| orchestrator | 2026-03-05 00:25:01.820969 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2026-03-05 00:25:01.820986 | orchestrator | Thursday 05 March 2026 00:24:45 +0000 (0:00:00.097) 0:00:21.114 ******** 2026-03-05 00:25:01.820991 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:25:01.820995 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:25:01.821000 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:25:01.821004 | orchestrator | 2026-03-05 00:25:01.821009 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-03-05 00:25:01.821014 | orchestrator | Thursday 05 March 2026 00:24:52 +0000 (0:00:07.111) 0:00:28.225 ******** 2026-03-05 00:25:01.821018 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:25:01.821022 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:25:01.821026 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:25:01.821031 | orchestrator | 2026-03-05 00:25:01.821035 | orchestrator | TASK [Copy fact files] ********************************************************* 2026-03-05 00:25:01.821040 | orchestrator | Thursday 05 March 2026 00:24:53 +0000 (0:00:00.462) 0:00:28.688 ******** 2026-03-05 00:25:01.821045 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2026-03-05 00:25:01.821050 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2026-03-05 00:25:01.821054 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2026-03-05 00:25:01.821059 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2026-03-05 00:25:01.821063 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2026-03-05 00:25:01.821068 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2026-03-05 00:25:01.821073 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2026-03-05 00:25:01.821077 | 
orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2026-03-05 00:25:01.821082 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2026-03-05 00:25:01.821087 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2026-03-05 00:25:01.821091 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2026-03-05 00:25:01.821096 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2026-03-05 00:25:01.821101 | orchestrator | 2026-03-05 00:25:01.821105 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-03-05 00:25:01.821109 | orchestrator | Thursday 05 March 2026 00:24:56 +0000 (0:00:03.483) 0:00:32.172 ******** 2026-03-05 00:25:01.821114 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:25:01.821118 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:25:01.821123 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:25:01.821127 | orchestrator | 2026-03-05 00:25:01.821132 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-05 00:25:01.821136 | orchestrator | 2026-03-05 00:25:01.821141 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-03-05 00:25:01.821145 | orchestrator | Thursday 05 March 2026 00:24:58 +0000 (0:00:01.350) 0:00:33.522 ******** 2026-03-05 00:25:01.821153 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:25:01.821158 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:25:01.821162 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:25:01.821178 | orchestrator | ok: [testbed-manager] 2026-03-05 00:25:01.821182 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:25:01.821213 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:25:01.821218 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:25:01.821223 | orchestrator | 2026-03-05 00:25:01.821228 | orchestrator | PLAY RECAP 
********************************************************************* 2026-03-05 00:25:01.821233 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 00:25:01.821238 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 00:25:01.821243 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 00:25:01.821247 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 00:25:01.821252 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-05 00:25:01.821257 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-05 00:25:01.821261 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-05 00:25:01.821266 | orchestrator | 2026-03-05 00:25:01.821270 | orchestrator | 2026-03-05 00:25:01.821275 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-05 00:25:01.821279 | orchestrator | Thursday 05 March 2026 00:25:01 +0000 (0:00:03.613) 0:00:37.136 ******** 2026-03-05 00:25:01.821284 | orchestrator | =============================================================================== 2026-03-05 00:25:01.821288 | orchestrator | osism.commons.repository : Update package cache ------------------------ 14.58s 2026-03-05 00:25:01.821293 | orchestrator | Install required packages (Debian) -------------------------------------- 7.11s 2026-03-05 00:25:01.821297 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.61s 2026-03-05 00:25:01.821302 | orchestrator | Copy fact files --------------------------------------------------------- 3.48s 2026-03-05 00:25:01.821306 | orchestrator | Create 
custom facts directory ------------------------------------------- 1.35s 2026-03-05 00:25:01.821310 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.35s 2026-03-05 00:25:01.821316 | orchestrator | Copy fact file ---------------------------------------------------------- 1.18s 2026-03-05 00:25:02.027330 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.06s 2026-03-05 00:25:02.027429 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.03s 2026-03-05 00:25:02.027442 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.46s 2026-03-05 00:25:02.027452 | orchestrator | Create custom facts directory ------------------------------------------- 0.46s 2026-03-05 00:25:02.027462 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.45s 2026-03-05 00:25:02.027472 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.23s 2026-03-05 00:25:02.027482 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.22s 2026-03-05 00:25:02.027491 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.13s 2026-03-05 00:25:02.027501 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.13s 2026-03-05 00:25:02.027530 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.12s 2026-03-05 00:25:02.027561 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.10s 2026-03-05 00:25:02.442449 | orchestrator | + osism apply bootstrap 2026-03-05 00:25:14.544423 | orchestrator | 2026-03-05 00:25:14 | INFO  | Prepare task for execution of bootstrap. 
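Editor's aside, not part of the captured job output: the `PLAY RECAP` blocks in this log use Ansible's fixed per-host counter format (`host : ok=N changed=N unreachable=N failed=N skipped=N rescued=N ignored=N`). When post-processing a console log like this one, a recap line can be parsed into structured counters with a small helper. This is an illustrative sketch; the function name and regex are my own, only the line format is taken from the log above.

```python
import re

# Matches one Ansible PLAY RECAP line, e.g.
#   testbed-node-0 : ok=12 changed=8 unreachable=0 failed=0 skipped=7 rescued=0 ignored=0
RECAP_RE = re.compile(r"^(?P<host>\S+)\s*:\s*(?P<counters>(?:\w+=\d+\s*)+)$")


def parse_recap_line(line: str) -> tuple[str, dict[str, int]]:
    """Split a recap line into (hostname, {counter: value}). Hypothetical helper."""
    m = RECAP_RE.match(line.strip())
    if m is None:
        raise ValueError(f"not a recap line: {line!r}")
    counters = {
        key: int(value)
        for key, value in (pair.split("=") for pair in m.group("counters").split())
    }
    return m.group("host"), counters


# Example input copied from the PLAY RECAP above (whitespace normalized).
host, counts = parse_recap_line(
    "testbed-node-0 : ok=12 changed=8 unreachable=0 failed=0 skipped=7 rescued=0 ignored=0"
)
```

A CI post-processing step could then flag any host where `counts["failed"]` or `counts["unreachable"]` is nonzero instead of grepping the raw text.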
2026-03-05 00:25:14.627402 | orchestrator | 2026-03-05 00:25:14 | INFO  | Task cef7f0d3-2cea-4bcb-a3e1-ea74a7f4ffab (bootstrap) was prepared for execution. 2026-03-05 00:25:14.627500 | orchestrator | 2026-03-05 00:25:14 | INFO  | It takes a moment until task cef7f0d3-2cea-4bcb-a3e1-ea74a7f4ffab (bootstrap) has been started and output is visible here. 2026-03-05 00:25:30.294307 | orchestrator | 2026-03-05 00:25:30.294460 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2026-03-05 00:25:30.294479 | orchestrator | 2026-03-05 00:25:30.294490 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2026-03-05 00:25:30.294501 | orchestrator | Thursday 05 March 2026 00:25:19 +0000 (0:00:00.144) 0:00:00.144 ******** 2026-03-05 00:25:30.294511 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:25:30.294521 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:25:30.294531 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:25:30.294541 | orchestrator | ok: [testbed-manager] 2026-03-05 00:25:30.294550 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:25:30.294560 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:25:30.294570 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:25:30.294579 | orchestrator | 2026-03-05 00:25:30.294589 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-05 00:25:30.294599 | orchestrator | 2026-03-05 00:25:30.294609 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-03-05 00:25:30.294618 | orchestrator | Thursday 05 March 2026 00:25:19 +0000 (0:00:00.261) 0:00:00.406 ******** 2026-03-05 00:25:30.294629 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:25:30.294639 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:25:30.294648 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:25:30.294658 | orchestrator | ok: [testbed-manager] 2026-03-05 
00:25:30.294667 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:25:30.294677 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:25:30.294686 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:25:30.294696 | orchestrator |
2026-03-05 00:25:30.294706 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-03-05 00:25:30.294785 | orchestrator |
2026-03-05 00:25:30.294802 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-05 00:25:30.294819 | orchestrator | Thursday 05 March 2026 00:25:22 +0000 (0:00:03.380) 0:00:03.786 ********
2026-03-05 00:25:30.294836 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-05 00:25:30.294855 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-05 00:25:30.294872 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-03-05 00:25:30.294889 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-05 00:25:30.294905 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-03-05 00:25:30.294915 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-03-05 00:25:30.294925 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-03-05 00:25:30.294935 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-03-05 00:25:30.294945 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-03-05 00:25:30.294955 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-03-05 00:25:30.294964 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-05 00:25:30.294974 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-03-05 00:25:30.294984 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-03-05 00:25:30.294994 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-05 00:25:30.295004 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-05 00:25:30.295032 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-03-05 00:25:30.295042 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-03-05 00:25:30.295052 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-03-05 00:25:30.295061 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-03-05 00:25:30.295071 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-05 00:25:30.295080 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-05 00:25:30.295090 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-03-05 00:25:30.295114 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:25:30.295124 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-03-05 00:25:30.295133 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:25:30.295153 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-03-05 00:25:30.295163 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-05 00:25:30.295172 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-03-05 00:25:30.295182 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-03-05 00:25:30.295191 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-05 00:25:30.295201 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:25:30.295210 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-03-05 00:25:30.295220 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-03-05 00:25:30.295230 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-03-05 00:25:30.295239 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-03-05 00:25:30.295257 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-03-05 00:25:30.295271 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-03-05 00:25:30.295298 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-03-05 00:25:30.295314 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-05 00:25:30.295329 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-03-05 00:25:30.295345 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:25:30.295361 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-03-05 00:25:30.295376 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-05 00:25:30.295391 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-03-05 00:25:30.295406 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-03-05 00:25:30.295421 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-05 00:25:30.295438 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:25:30.295477 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-03-05 00:25:30.295493 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-03-05 00:25:30.295509 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-03-05 00:25:30.295525 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-03-05 00:25:30.295541 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-03-05 00:25:30.295558 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:25:30.295575 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-03-05 00:25:30.295592 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-03-05 00:25:30.295609 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:25:30.295626 | orchestrator |
2026-03-05 00:25:30.295642 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2026-03-05 00:25:30.295656 | orchestrator |
2026-03-05 00:25:30.295667 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2026-03-05 00:25:30.295677 | orchestrator | Thursday 05 March 2026 00:25:23 +0000 (0:00:00.464) 0:00:04.251 ********
2026-03-05 00:25:30.295686 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:25:30.295737 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:25:30.295749 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:25:30.295759 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:25:30.295768 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:25:30.295778 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:25:30.295787 | orchestrator | ok: [testbed-manager]
2026-03-05 00:25:30.295797 | orchestrator |
2026-03-05 00:25:30.295806 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2026-03-05 00:25:30.295816 | orchestrator | Thursday 05 March 2026 00:25:24 +0000 (0:00:01.174) 0:00:05.426 ********
2026-03-05 00:25:30.295825 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:25:30.295835 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:25:30.295845 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:25:30.295854 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:25:30.295864 | orchestrator | ok: [testbed-manager]
2026-03-05 00:25:30.295873 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:25:30.295887 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:25:30.295906 | orchestrator |
2026-03-05 00:25:30.295931 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2026-03-05 00:25:30.295946 | orchestrator | Thursday 05 March 2026 00:25:25 +0000 (0:00:00.280) 0:00:06.584 ********
2026-03-05 00:25:30.295963 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 00:25:30.295980 | orchestrator |
2026-03-05 00:25:30.295996 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2026-03-05 00:25:30.296011 | orchestrator | Thursday 05 March 2026 00:25:25 +0000 (0:00:00.280) 0:00:06.864 ********
2026-03-05 00:25:30.296024 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:25:30.296038 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:25:30.296051 | orchestrator | changed: [testbed-manager]
2026-03-05 00:25:30.296067 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:25:30.296082 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:25:30.296097 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:25:30.296112 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:25:30.296128 | orchestrator |
2026-03-05 00:25:30.296143 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2026-03-05 00:25:30.296161 | orchestrator | Thursday 05 March 2026 00:25:27 +0000 (0:00:02.012) 0:00:08.876 ********
2026-03-05 00:25:30.296177 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:25:30.296196 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 00:25:30.296210 | orchestrator |
2026-03-05 00:25:30.296220 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2026-03-05 00:25:30.296229 | orchestrator | Thursday 05 March 2026 00:25:28 +0000 (0:00:00.271) 0:00:09.148 ********
2026-03-05 00:25:30.296240 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:25:30.296256 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:25:30.296281 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:25:30.296299 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:25:30.296314 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:25:30.296347 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:25:30.296364 | orchestrator |
2026-03-05 00:25:30.296380 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2026-03-05 00:25:30.296398 | orchestrator | Thursday 05 March 2026 00:25:29 +0000 (0:00:00.994) 0:00:10.142 ********
2026-03-05 00:25:30.296415 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:25:30.296431 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:25:30.296443 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:25:30.296453 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:25:30.296462 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:25:30.296472 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:25:30.296491 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:25:30.296501 | orchestrator |
2026-03-05 00:25:30.296511 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2026-03-05 00:25:30.296525 | orchestrator | Thursday 05 March 2026 00:25:29 +0000 (0:00:00.554) 0:00:10.697 ********
2026-03-05 00:25:30.296535 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:25:30.296545 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:25:30.296554 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:25:30.296564 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:25:30.296573 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:25:30.296583 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:25:30.296593 | orchestrator | ok: [testbed-manager]
2026-03-05 00:25:30.296602 | orchestrator |
2026-03-05 00:25:30.296612 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-03-05 00:25:30.296623 | orchestrator | Thursday 05 March 2026 00:25:30 +0000 (0:00:00.573) 0:00:11.271 ********
2026-03-05 00:25:30.296633 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:25:30.296642 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:25:30.296664 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:25:42.295225 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:25:42.295340 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:25:42.295355 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:25:42.295366 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:25:42.295377 | orchestrator |
2026-03-05 00:25:42.295390 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-03-05 00:25:42.295402 | orchestrator | Thursday 05 March 2026 00:25:30 +0000 (0:00:00.216) 0:00:11.487 ********
2026-03-05 00:25:42.295415 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 00:25:42.295503 | orchestrator |
2026-03-05 00:25:42.295524 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-03-05 00:25:42.295543 | orchestrator | Thursday 05 March 2026 00:25:30 +0000 (0:00:00.290) 0:00:11.778 ********
2026-03-05 00:25:42.295562 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 00:25:42.295579 | orchestrator |
2026-03-05 00:25:42.295597 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-03-05 00:25:42.295614 | orchestrator | Thursday 05 March 2026 00:25:31 +0000 (0:00:00.435) 0:00:12.214 ********
2026-03-05 00:25:42.295632 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:25:42.295649 | orchestrator | ok: [testbed-manager]
2026-03-05 00:25:42.295667 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:25:42.295783 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:25:42.295807 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:25:42.295825 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:25:42.295844 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:25:42.295864 | orchestrator |
2026-03-05 00:25:42.295884 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-03-05 00:25:42.295906 | orchestrator | Thursday 05 March 2026 00:25:32 +0000 (0:00:01.291) 0:00:13.506 ********
2026-03-05 00:25:42.295925 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:25:42.295945 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:25:42.295965 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:25:42.295983 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:25:42.296002 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:25:42.296022 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:25:42.296041 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:25:42.296060 | orchestrator |
2026-03-05 00:25:42.296081 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-03-05 00:25:42.296131 | orchestrator | Thursday 05 March 2026 00:25:32 +0000 (0:00:00.242) 0:00:13.748 ********
2026-03-05 00:25:42.296145 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:25:42.296159 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:25:42.296170 | orchestrator | ok: [testbed-manager]
2026-03-05 00:25:42.296180 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:25:42.296191 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:25:42.296201 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:25:42.296212 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:25:42.296223 | orchestrator |
2026-03-05 00:25:42.296233 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-03-05 00:25:42.296244 | orchestrator | Thursday 05 March 2026 00:25:33 +0000 (0:00:00.592) 0:00:14.340 ********
2026-03-05 00:25:42.296255 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:25:42.296265 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:25:42.296276 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:25:42.296287 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:25:42.296297 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:25:42.296308 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:25:42.296318 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:25:42.296329 | orchestrator |
2026-03-05 00:25:42.296340 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-03-05 00:25:42.296352 | orchestrator | Thursday 05 March 2026 00:25:33 +0000 (0:00:00.254) 0:00:14.595 ********
2026-03-05 00:25:42.296362 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:25:42.296373 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:25:42.296383 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:25:42.296394 | orchestrator | ok: [testbed-manager]
2026-03-05 00:25:42.296404 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:25:42.296415 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:25:42.296425 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:25:42.296436 | orchestrator |
2026-03-05 00:25:42.296447 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-03-05 00:25:42.296457 | orchestrator | Thursday 05 March 2026 00:25:34 +0000 (0:00:00.556) 0:00:15.151 ********
2026-03-05 00:25:42.296468 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:25:42.296478 | orchestrator | ok: [testbed-manager]
2026-03-05 00:25:42.296489 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:25:42.296500 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:25:42.296510 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:25:42.296521 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:25:42.296531 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:25:42.296542 | orchestrator |
2026-03-05 00:25:42.296563 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-03-05 00:25:42.296574 | orchestrator | Thursday 05 March 2026 00:25:35 +0000 (0:00:01.157) 0:00:16.309 ********
2026-03-05 00:25:42.296585 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:25:42.296596 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:25:42.296606 | orchestrator | ok: [testbed-manager]
2026-03-05 00:25:42.296617 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:25:42.296628 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:25:42.296638 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:25:42.296649 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:25:42.296659 | orchestrator |
2026-03-05 00:25:42.296670 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-03-05 00:25:42.296681 | orchestrator | Thursday 05 March 2026 00:25:36 +0000 (0:00:01.114) 0:00:17.423 ********
2026-03-05 00:25:42.296742 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 00:25:42.296756 | orchestrator |
2026-03-05 00:25:42.296767 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-03-05 00:25:42.296787 | orchestrator | Thursday 05 March 2026 00:25:36 +0000 (0:00:00.336) 0:00:17.760 ********
2026-03-05 00:25:42.296797 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:25:42.296808 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:25:42.296819 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:25:42.296829 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:25:42.296840 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:25:42.296850 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:25:42.296861 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:25:42.296871 | orchestrator |
2026-03-05 00:25:42.296882 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-03-05 00:25:42.296893 | orchestrator | Thursday 05 March 2026 00:25:37 +0000 (0:00:01.281) 0:00:19.042 ********
2026-03-05 00:25:42.296903 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:25:42.296914 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:25:42.296925 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:25:42.296935 | orchestrator | ok: [testbed-manager]
2026-03-05 00:25:42.296945 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:25:42.296956 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:25:42.296973 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:25:42.296991 | orchestrator |
2026-03-05 00:25:42.297010 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-03-05 00:25:42.297026 | orchestrator | Thursday 05 March 2026 00:25:38 +0000 (0:00:00.234) 0:00:19.277 ********
2026-03-05 00:25:42.297037 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:25:42.297047 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:25:42.297058 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:25:42.297069 | orchestrator | ok: [testbed-manager]
2026-03-05 00:25:42.297079 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:25:42.297089 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:25:42.297100 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:25:42.297110 | orchestrator |
2026-03-05 00:25:42.297121 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-03-05 00:25:42.297132 | orchestrator | Thursday 05 March 2026 00:25:38 +0000 (0:00:00.236) 0:00:19.513 ********
2026-03-05 00:25:42.297142 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:25:42.297153 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:25:42.297164 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:25:42.297195 | orchestrator | ok: [testbed-manager]
2026-03-05 00:25:42.297206 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:25:42.297216 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:25:42.297227 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:25:42.297237 | orchestrator |
2026-03-05 00:25:42.297248 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-03-05 00:25:42.297259 | orchestrator | Thursday 05 March 2026 00:25:38 +0000 (0:00:00.233) 0:00:19.746 ********
2026-03-05 00:25:42.297270 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 00:25:42.297283 | orchestrator |
2026-03-05 00:25:42.297294 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-03-05 00:25:42.297304 | orchestrator | Thursday 05 March 2026 00:25:38 +0000 (0:00:00.287) 0:00:20.033 ********
2026-03-05 00:25:42.297315 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:25:42.297325 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:25:42.297336 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:25:42.297347 | orchestrator | ok: [testbed-manager]
2026-03-05 00:25:42.297357 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:25:42.297368 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:25:42.297378 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:25:42.297389 | orchestrator |
2026-03-05 00:25:42.297400 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-03-05 00:25:42.297410 | orchestrator | Thursday 05 March 2026 00:25:39 +0000 (0:00:00.519) 0:00:20.553 ********
2026-03-05 00:25:42.297421 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:25:42.297440 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:25:42.297451 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:25:42.297462 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:25:42.297472 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:25:42.297483 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:25:42.297493 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:25:42.297504 | orchestrator |
2026-03-05 00:25:42.297515 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-03-05 00:25:42.297526 | orchestrator | Thursday 05 March 2026 00:25:39 +0000 (0:00:00.221) 0:00:20.775 ********
2026-03-05 00:25:42.297536 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:25:42.297547 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:25:42.297557 | orchestrator | ok: [testbed-manager]
2026-03-05 00:25:42.297568 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:25:42.297579 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:25:42.297589 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:25:42.297600 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:25:42.297611 | orchestrator |
2026-03-05 00:25:42.297621 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-03-05 00:25:42.297632 | orchestrator | Thursday 05 March 2026 00:25:40 +0000 (0:00:01.028) 0:00:21.803 ********
2026-03-05 00:25:42.297643 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:25:42.297654 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:25:42.297665 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:25:42.297675 | orchestrator | ok: [testbed-manager]
2026-03-05 00:25:42.297686 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:25:42.297696 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:25:42.297770 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:25:42.297790 | orchestrator |
2026-03-05 00:25:42.297801 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-03-05 00:25:42.297812 | orchestrator | Thursday 05 March 2026 00:25:41 +0000 (0:00:00.533) 0:00:22.336 ********
2026-03-05 00:25:42.297823 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:25:42.297834 | orchestrator | ok: [testbed-manager]
2026-03-05 00:25:42.297845 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:25:42.297855 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:25:42.297875 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:26:23.359002 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:26:23.359120 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:26:23.359137 | orchestrator |
2026-03-05 00:26:23.359150 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-03-05 00:26:23.359180 | orchestrator | Thursday 05 March 2026 00:25:42 +0000 (0:00:01.094) 0:00:23.431 ********
2026-03-05 00:26:23.359203 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:26:23.359216 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:26:23.359227 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:26:23.359238 | orchestrator | changed: [testbed-manager]
2026-03-05 00:26:23.359249 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:26:23.359259 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:26:23.359270 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:26:23.359281 | orchestrator |
2026-03-05 00:26:23.359293 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2026-03-05 00:26:23.359318 | orchestrator | Thursday 05 March 2026 00:25:58 +0000 (0:00:15.817) 0:00:39.248 ********
2026-03-05 00:26:23.359340 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:26:23.359351 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:26:23.359362 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:26:23.359373 | orchestrator | ok: [testbed-manager]
2026-03-05 00:26:23.359384 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:26:23.359394 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:26:23.359405 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:26:23.359416 | orchestrator |
2026-03-05 00:26:23.359427 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2026-03-05 00:26:23.359438 | orchestrator | Thursday 05 March 2026 00:25:58 +0000 (0:00:00.252) 0:00:39.501 ********
2026-03-05 00:26:23.359481 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:26:23.359501 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:26:23.359519 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:26:23.359536 | orchestrator | ok: [testbed-manager]
2026-03-05 00:26:23.359554 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:26:23.359571 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:26:23.359589 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:26:23.359605 | orchestrator |
2026-03-05 00:26:23.359622 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2026-03-05 00:26:23.359639 | orchestrator | Thursday 05 March 2026 00:25:58 +0000 (0:00:00.227) 0:00:39.729 ********
2026-03-05 00:26:23.359656 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:26:23.359744 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:26:23.359765 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:26:23.359785 | orchestrator | ok: [testbed-manager]
2026-03-05 00:26:23.359803 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:26:23.359822 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:26:23.359840 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:26:23.359860 | orchestrator |
2026-03-05 00:26:23.359879 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2026-03-05 00:26:23.359898 | orchestrator | Thursday 05 March 2026 00:25:58 +0000 (0:00:00.227) 0:00:39.957 ********
2026-03-05 00:26:23.359911 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 00:26:23.359926 | orchestrator |
2026-03-05 00:26:23.359937 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2026-03-05 00:26:23.359948 | orchestrator | Thursday 05 March 2026 00:25:59 +0000 (0:00:00.311) 0:00:40.268 ********
2026-03-05 00:26:23.359959 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:26:23.359970 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:26:23.359980 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:26:23.359991 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:26:23.360022 | orchestrator | ok: [testbed-manager]
2026-03-05 00:26:23.360033 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:26:23.360044 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:26:23.360055 | orchestrator |
2026-03-05 00:26:23.360066 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2026-03-05 00:26:23.360077 | orchestrator | Thursday 05 March 2026 00:26:00 +0000 (0:00:01.622) 0:00:41.891 ********
2026-03-05 00:26:23.360088 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:26:23.360099 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:26:23.360110 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:26:23.360121 | orchestrator | changed: [testbed-manager]
2026-03-05 00:26:23.360131 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:26:23.360142 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:26:23.360152 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:26:23.360164 | orchestrator |
2026-03-05 00:26:23.360174 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2026-03-05 00:26:23.360185 | orchestrator | Thursday 05 March 2026 00:26:01 +0000 (0:00:01.036) 0:00:42.927 ********
2026-03-05 00:26:23.360196 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:26:23.360207 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:26:23.360218 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:26:23.360229 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:26:23.360239 | orchestrator | ok: [testbed-manager]
2026-03-05 00:26:23.360250 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:26:23.360261 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:26:23.360271 | orchestrator |
2026-03-05 00:26:23.360282 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2026-03-05 00:26:23.360293 | orchestrator | Thursday 05 March 2026 00:26:02 +0000 (0:00:00.795) 0:00:43.723 ********
2026-03-05 00:26:23.360309 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 00:26:23.360335 | orchestrator |
2026-03-05 00:26:23.360346 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2026-03-05 00:26:23.360358 | orchestrator | Thursday 05 March 2026 00:26:03 +0000 (0:00:00.380) 0:00:44.104 ********
2026-03-05 00:26:23.360369 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:26:23.360379 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:26:23.360390 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:26:23.360401 | orchestrator | changed: [testbed-manager]
2026-03-05 00:26:23.360412 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:26:23.360423 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:26:23.360434 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:26:23.360444 | orchestrator |
2026-03-05 00:26:23.360475 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2026-03-05 00:26:23.360486 | orchestrator | Thursday 05 March 2026 00:26:04 +0000 (0:00:01.077) 0:00:45.182 ********
2026-03-05 00:26:23.360497 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:26:23.360508 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:26:23.360518 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:26:23.360529 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:26:23.360540 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:26:23.360551 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:26:23.360561 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:26:23.360572 | orchestrator |
2026-03-05 00:26:23.360583 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************
2026-03-05 00:26:23.360594 | orchestrator | Thursday 05 March 2026 00:26:04 +0000 (0:00:00.259) 0:00:45.441 ********
2026-03-05 00:26:23.360605 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 00:26:23.360617 | orchestrator |
2026-03-05 00:26:23.360627 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] **********
2026-03-05 00:26:23.360638 | orchestrator | Thursday 05 March 2026 00:26:04 +0000 (0:00:00.355) 0:00:45.796 ********
2026-03-05 00:26:23.360649 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:26:23.360660 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:26:23.360698 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:26:23.360715 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:26:23.360726 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:26:23.360737 | orchestrator | ok: [testbed-manager]
2026-03-05 00:26:23.360747 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:26:23.360758 | orchestrator |
2026-03-05 00:26:23.360769 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] ****************
2026-03-05 00:26:23.360780 | orchestrator | Thursday 05 March 2026 00:26:06 +0000 (0:00:01.569) 0:00:47.366 ********
2026-03-05 00:26:23.360791 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:26:23.360802 | orchestrator | changed: [testbed-manager]
2026-03-05 00:26:23.360813 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:26:23.360823 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:26:23.360834 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:26:23.360845 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:26:23.360856 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:26:23.360867 | orchestrator |
2026-03-05 00:26:23.360877 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2026-03-05 00:26:23.360888 | orchestrator | Thursday 05 March 2026 00:26:07 +0000 (0:00:12.727) 0:00:48.519 ********
2026-03-05 00:26:23.360899 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:26:23.360910 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:26:23.360921 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:26:23.360931 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:26:23.360942 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:26:23.360953 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:26:23.360973 | orchestrator | changed: [testbed-manager]
2026-03-05 00:26:23.360984 | orchestrator |
2026-03-05 00:26:23.360995 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2026-03-05 00:26:23.361006 | orchestrator | Thursday 05 March 2026 00:26:20 +0000 (0:00:12.727) 0:01:01.247 ********
2026-03-05 00:26:23.361017 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:26:23.361027 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:26:23.361038 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:26:23.361049 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:26:23.361060 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:26:23.361070 | orchestrator | ok: [testbed-manager]
2026-03-05 00:26:23.361081 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:26:23.361092 | orchestrator |
2026-03-05 00:26:23.361103 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2026-03-05 00:26:23.361114 | orchestrator | Thursday 05 March 2026 00:26:21 +0000 (0:00:01.509) 0:01:02.757 ********
2026-03-05 00:26:23.361124 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:26:23.361135 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:26:23.361146 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:26:23.361156 | orchestrator | ok: [testbed-manager]
2026-03-05 00:26:23.361167 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:26:23.361178 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:26:23.361188 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:26:23.361199 | orchestrator |
2026-03-05 00:26:23.361210 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2026-03-05 00:26:23.361221 | orchestrator | Thursday 05 March 2026 00:26:22 +0000 (0:00:00.908) 0:01:03.665 ********
2026-03-05 00:26:23.361231 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:26:23.361242 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:26:23.361253 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:26:23.361263 | orchestrator | ok: [testbed-manager]
2026-03-05 00:26:23.361274 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:26:23.361285 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:26:23.361295 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:26:23.361306 | orchestrator |
2026-03-05 00:26:23.361317 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2026-03-05 00:26:23.361328 | orchestrator | Thursday 05 March 2026 00:26:22 +0000 (0:00:00.252) 0:01:03.918 ********
2026-03-05 00:26:23.361339 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:26:23.361350 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:26:23.361360 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:26:23.361376 | orchestrator | ok: [testbed-manager]
2026-03-05 00:26:23.361387 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:26:23.361398 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:26:23.361408 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:26:23.361419 | orchestrator |
2026-03-05 00:26:23.361430 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2026-03-05 00:26:23.361441 | orchestrator | Thursday 05 March 2026 00:26:23 +0000 (0:00:00.290) 0:01:04.153 ********
2026-03-05 00:26:23.361453 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 00:26:23.361464 | orchestrator |
2026-03-05 00:26:23.361483 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2026-03-05 00:28:47.970742 | orchestrator | Thursday 05 March 2026 00:26:23 +0000 (0:00:00.290) 0:01:04.444 ********
2026-03-05 00:28:47.970864 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:28:47.970881 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:28:47.970893 | orchestrator |
ok: [testbed-manager] 2026-03-05 00:28:47.970904 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:28:47.970915 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:28:47.970926 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:28:47.970937 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:28:47.970948 | orchestrator | 2026-03-05 00:28:47.970959 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2026-03-05 00:28:47.970996 | orchestrator | Thursday 05 March 2026 00:26:24 +0000 (0:00:01.537) 0:01:05.982 ******** 2026-03-05 00:28:47.971008 | orchestrator | changed: [testbed-manager] 2026-03-05 00:28:47.971020 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:28:47.971031 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:28:47.971042 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:28:47.971052 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:28:47.971063 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:28:47.971074 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:28:47.971084 | orchestrator | 2026-03-05 00:28:47.971096 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2026-03-05 00:28:47.971108 | orchestrator | Thursday 05 March 2026 00:26:25 +0000 (0:00:00.529) 0:01:06.512 ******** 2026-03-05 00:28:47.971118 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:28:47.971129 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:28:47.971140 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:28:47.971150 | orchestrator | ok: [testbed-manager] 2026-03-05 00:28:47.971161 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:28:47.971172 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:28:47.971182 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:28:47.971193 | orchestrator | 2026-03-05 00:28:47.971204 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2026-03-05 
00:28:47.971215 | orchestrator | Thursday 05 March 2026 00:26:25 +0000 (0:00:00.220) 0:01:06.733 ******** 2026-03-05 00:28:47.971225 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:28:47.971236 | orchestrator | ok: [testbed-manager] 2026-03-05 00:28:47.971247 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:28:47.971257 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:28:47.971268 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:28:47.971280 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:28:47.971293 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:28:47.971305 | orchestrator | 2026-03-05 00:28:47.971318 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2026-03-05 00:28:47.971331 | orchestrator | Thursday 05 March 2026 00:26:26 +0000 (0:00:01.143) 0:01:07.877 ******** 2026-03-05 00:28:47.971344 | orchestrator | changed: [testbed-manager] 2026-03-05 00:28:47.971356 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:28:47.971369 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:28:47.971381 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:28:47.971394 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:28:47.971407 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:28:47.971419 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:28:47.971432 | orchestrator | 2026-03-05 00:28:47.971444 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2026-03-05 00:28:47.971458 | orchestrator | Thursday 05 March 2026 00:26:28 +0000 (0:00:01.939) 0:01:09.816 ******** 2026-03-05 00:28:47.971470 | orchestrator | ok: [testbed-manager] 2026-03-05 00:28:47.971483 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:28:47.971557 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:28:47.971572 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:28:47.971584 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:28:47.971597 | orchestrator | ok: 
[testbed-node-2] 2026-03-05 00:28:47.971609 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:28:47.971621 | orchestrator | 2026-03-05 00:28:47.971634 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2026-03-05 00:28:47.971645 | orchestrator | Thursday 05 March 2026 00:26:31 +0000 (0:00:03.274) 0:01:13.090 ******** 2026-03-05 00:28:47.971656 | orchestrator | ok: [testbed-manager] 2026-03-05 00:28:47.971666 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:28:47.971677 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:28:47.971688 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:28:47.971698 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:28:47.971709 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:28:47.971722 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:28:47.971753 | orchestrator | 2026-03-05 00:28:47.971773 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2026-03-05 00:28:47.971790 | orchestrator | Thursday 05 March 2026 00:27:07 +0000 (0:00:35.898) 0:01:48.988 ******** 2026-03-05 00:28:47.971808 | orchestrator | changed: [testbed-manager] 2026-03-05 00:28:47.971826 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:28:47.971844 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:28:47.971862 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:28:47.971880 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:28:47.971898 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:28:47.971917 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:28:47.971928 | orchestrator | 2026-03-05 00:28:47.971940 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2026-03-05 00:28:47.971950 | orchestrator | Thursday 05 March 2026 00:28:32 +0000 (0:01:24.210) 0:03:13.199 ******** 2026-03-05 00:28:47.971961 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:28:47.971972 | orchestrator | ok: 
[testbed-node-4] 2026-03-05 00:28:47.971982 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:28:47.971993 | orchestrator | ok: [testbed-manager] 2026-03-05 00:28:47.972004 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:28:47.972015 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:28:47.972026 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:28:47.972037 | orchestrator | 2026-03-05 00:28:47.972048 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2026-03-05 00:28:47.972059 | orchestrator | Thursday 05 March 2026 00:28:33 +0000 (0:00:01.869) 0:03:15.068 ******** 2026-03-05 00:28:47.972069 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:28:47.972080 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:28:47.972091 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:28:47.972101 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:28:47.972112 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:28:47.972122 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:28:47.972133 | orchestrator | changed: [testbed-manager] 2026-03-05 00:28:47.972144 | orchestrator | 2026-03-05 00:28:47.972155 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2026-03-05 00:28:47.972166 | orchestrator | Thursday 05 March 2026 00:28:46 +0000 (0:00:12.782) 0:03:27.851 ******** 2026-03-05 00:28:47.972262 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2026-03-05 00:28:47.972301 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, 
testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2026-03-05 00:28:47.972318 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2026-03-05 00:28:47.972330 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-03-05 00:28:47.972353 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-03-05 00:28:47.972368 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 
'value': 1024}]}) 2026-03-05 00:28:47.972380 | orchestrator | 2026-03-05 00:28:47.972391 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2026-03-05 00:28:47.972403 | orchestrator | Thursday 05 March 2026 00:28:47 +0000 (0:00:00.401) 0:03:28.252 ******** 2026-03-05 00:28:47.972414 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-05 00:28:47.972424 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-05 00:28:47.972435 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:28:47.972446 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-05 00:28:47.972457 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:28:47.972468 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-05 00:28:47.972479 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:28:47.972490 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:28:47.972553 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-05 00:28:47.972577 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-05 00:28:47.972589 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-05 00:28:47.972600 | orchestrator | 2026-03-05 00:28:47.972610 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2026-03-05 00:28:47.972626 | orchestrator | Thursday 05 March 2026 00:28:47 +0000 (0:00:00.737) 0:03:28.990 ******** 2026-03-05 00:28:47.972637 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-05 00:28:47.972649 | orchestrator | skipping: [testbed-node-3] => (item={'name': 
'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-05 00:28:47.972660 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-05 00:28:47.972671 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-05 00:28:47.972681 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-05 00:28:47.972700 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-05 00:28:55.085202 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-05 00:28:55.085315 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-05 00:28:55.085332 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-05 00:28:55.085344 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-05 00:28:55.085356 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-05 00:28:55.085367 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-05 00:28:55.085379 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-05 00:28:55.085415 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-05 00:28:55.085427 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-05 00:28:55.085438 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-05 00:28:55.085449 | orchestrator | skipping: [testbed-node-4] => (item={'name': 
'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-05 00:28:55.085460 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-05 00:28:55.085479 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:28:55.085597 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-05 00:28:55.085618 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-05 00:28:55.085635 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-05 00:28:55.085646 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-05 00:28:55.085657 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-05 00:28:55.085668 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-05 00:28:55.085679 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-05 00:28:55.085690 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:28:55.085701 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-05 00:28:55.085712 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-05 00:28:55.085723 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-05 00:28:55.085736 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-05 00:28:55.085749 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-05 00:28:55.085762 | orchestrator | skipping: [testbed-manager] => (item={'name': 
'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-05 00:28:55.085775 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-05 00:28:55.085788 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-05 00:28:55.085802 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:28:55.085815 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-05 00:28:55.085828 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-05 00:28:55.085846 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-05 00:28:55.085866 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-05 00:28:55.085887 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-05 00:28:55.085908 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-05 00:28:55.085939 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-05 00:28:55.085954 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:28:55.085967 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-03-05 00:28:55.085979 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-03-05 00:28:55.085992 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-03-05 00:28:55.086015 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-03-05 00:28:55.086079 | orchestrator | changed: [testbed-node-1] => 
(item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-03-05 00:28:55.086113 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-03-05 00:28:55.086127 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-03-05 00:28:55.086137 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-03-05 00:28:55.086148 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-03-05 00:28:55.086159 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-03-05 00:28:55.086170 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-03-05 00:28:55.086181 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-03-05 00:28:55.086192 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-03-05 00:28:55.086203 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-03-05 00:28:55.086213 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-03-05 00:28:55.086225 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-03-05 00:28:55.086235 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-03-05 00:28:55.086246 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-03-05 00:28:55.086257 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-03-05 00:28:55.086268 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 
'value': 0}) 2026-03-05 00:28:55.086278 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-03-05 00:28:55.086289 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-03-05 00:28:55.086300 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-03-05 00:28:55.086311 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-03-05 00:28:55.086322 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-03-05 00:28:55.086332 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-03-05 00:28:55.086343 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-03-05 00:28:55.086354 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-03-05 00:28:55.086365 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-03-05 00:28:55.086376 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-03-05 00:28:55.086387 | orchestrator | 2026-03-05 00:28:55.086398 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2026-03-05 00:28:55.086409 | orchestrator | Thursday 05 March 2026 00:28:53 +0000 (0:00:05.177) 0:03:34.167 ******** 2026-03-05 00:28:55.086420 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-05 00:28:55.086431 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-05 00:28:55.086448 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-05 00:28:55.086465 | orchestrator | 
changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-05 00:28:55.086525 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-05 00:28:55.086545 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-05 00:28:55.086563 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-05 00:28:55.086582 | orchestrator | 2026-03-05 00:28:55.086602 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2026-03-05 00:28:55.086621 | orchestrator | Thursday 05 March 2026 00:28:54 +0000 (0:00:01.537) 0:03:35.705 ******** 2026-03-05 00:28:55.086641 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-05 00:28:55.086670 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-05 00:28:55.086690 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-05 00:28:55.086711 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:28:55.086730 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:28:55.086746 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:28:55.086757 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-05 00:28:55.086768 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:28:55.086779 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-03-05 00:28:55.086790 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-03-05 00:28:55.086817 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 
2026-03-05 00:29:08.695339 | orchestrator |
2026-03-05 00:29:08.695431 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-03-05 00:29:08.695440 | orchestrator | Thursday 05 March 2026 00:28:55 +0000 (0:00:00.494) 0:03:36.200 ********
2026-03-05 00:29:08.695447 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-05 00:29:08.695454 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:29:08.695462 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-05 00:29:08.695469 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-05 00:29:08.695475 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:29:08.695523 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:29:08.695530 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-05 00:29:08.695536 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:29:08.695543 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-05 00:29:08.695549 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-05 00:29:08.695556 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-05 00:29:08.695562 | orchestrator |
2026-03-05 00:29:08.695569 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2026-03-05 00:29:08.695576 | orchestrator | Thursday 05 March 2026 00:28:55 +0000 (0:00:00.653) 0:03:36.853 ********
2026-03-05 00:29:08.695582 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-05 00:29:08.695589 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-05 00:29:08.695595 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:29:08.695602 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-05 00:29:08.695628 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:29:08.695635 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:29:08.695641 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-05 00:29:08.695647 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:29:08.695653 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-05 00:29:08.695660 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-05 00:29:08.695666 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-05 00:29:08.695672 | orchestrator |
2026-03-05 00:29:08.695679 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2026-03-05 00:29:08.695685 | orchestrator | Thursday 05 March 2026 00:28:56 +0000 (0:00:00.343) 0:03:37.379 ********
2026-03-05 00:29:08.695691 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:29:08.695698 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:29:08.695704 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:29:08.695710 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:29:08.695717 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:29:08.695723 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:29:08.695729 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:29:08.695735 | orchestrator |
2026-03-05 00:29:08.695741 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2026-03-05 00:29:08.695748 | orchestrator | Thursday 05 March 2026 00:28:56 +0000 (0:00:00.343) 0:03:37.722 ********
2026-03-05 00:29:08.695754 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:29:08.695761 | orchestrator | ok: [testbed-manager]
2026-03-05 00:29:08.695768 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:29:08.695774 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:29:08.695780 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:29:08.695786 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:29:08.695792 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:29:08.695798 | orchestrator |
2026-03-05 00:29:08.695805 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2026-03-05 00:29:08.695811 | orchestrator | Thursday 05 March 2026 00:29:02 +0000 (0:00:05.964) 0:03:43.687 ********
2026-03-05 00:29:08.695818 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2026-03-05 00:29:08.695824 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2026-03-05 00:29:08.695830 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:29:08.695836 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2026-03-05 00:29:08.695843 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:29:08.695849 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:29:08.695855 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2026-03-05 00:29:08.695861 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2026-03-05 00:29:08.695868 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:29:08.695874 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2026-03-05 00:29:08.695880 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:29:08.695886 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:29:08.695903 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2026-03-05 00:29:08.695910 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:29:08.695918 | orchestrator |
2026-03-05 00:29:08.695925 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2026-03-05 00:29:08.695942 | orchestrator | Thursday 05 March 2026 00:29:03 +0000 (0:00:00.472) 0:03:44.159 ********
2026-03-05 00:29:08.695956 | orchestrator | ok: [testbed-node-3] => (item=cron)
2026-03-05 00:29:08.695963 | orchestrator | ok: [testbed-node-4] => (item=cron)
2026-03-05 00:29:08.695971 | orchestrator | ok: [testbed-node-5] => (item=cron)
2026-03-05 00:29:08.695991 | orchestrator | ok: [testbed-manager] => (item=cron)
2026-03-05 00:29:08.696000 | orchestrator | ok: [testbed-node-0] => (item=cron)
2026-03-05 00:29:08.696007 | orchestrator | ok: [testbed-node-1] => (item=cron)
2026-03-05 00:29:08.696020 | orchestrator | ok: [testbed-node-2] => (item=cron)
2026-03-05 00:29:08.696027 | orchestrator |
2026-03-05 00:29:08.696034 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2026-03-05 00:29:08.696041 | orchestrator | Thursday 05 March 2026 00:29:04 +0000 (0:00:01.046) 0:03:45.206 ********
2026-03-05 00:29:08.696050 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 00:29:08.696061 | orchestrator |
2026-03-05 00:29:08.696068 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2026-03-05 00:29:08.696075 | orchestrator | Thursday 05 March 2026 00:29:04 +0000 (0:00:00.463) 0:03:45.670 ********
2026-03-05 00:29:08.696082 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:29:08.696089 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:29:08.696096 | orchestrator | ok: [testbed-manager]
2026-03-05 00:29:08.696104 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:29:08.696112 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:29:08.696119 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:29:08.696126 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:29:08.696134 | orchestrator |
2026-03-05 00:29:08.696141 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2026-03-05 00:29:08.696149 | orchestrator | Thursday 05 March 2026 00:29:06 +0000 (0:00:01.478) 0:03:47.149 ********
2026-03-05 00:29:08.696157 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:29:08.696163 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:29:08.696169 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:29:08.696175 | orchestrator | ok: [testbed-manager]
2026-03-05 00:29:08.696181 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:29:08.696187 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:29:08.696193 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:29:08.696200 | orchestrator |
2026-03-05 00:29:08.696206 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2026-03-05 00:29:08.696212 | orchestrator | Thursday 05 March 2026 00:29:06 +0000 (0:00:00.697) 0:03:47.846 ********
2026-03-05 00:29:08.696218 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:29:08.696240 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:29:08.696246 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:29:08.696253 | orchestrator | changed: [testbed-manager]
2026-03-05 00:29:08.696259 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:29:08.696265 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:29:08.696271 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:29:08.696277 | orchestrator |
2026-03-05 00:29:08.696283 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2026-03-05 00:29:08.696290 | orchestrator | Thursday 05 March 2026 00:29:07 +0000 (0:00:00.651)
0:03:48.497 ******** 2026-03-05 00:29:08.696296 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:29:08.696302 | orchestrator | ok: [testbed-manager] 2026-03-05 00:29:08.696308 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:29:08.696314 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:29:08.696321 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:29:08.696327 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:29:08.696333 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:29:08.696339 | orchestrator | 2026-03-05 00:29:08.696345 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2026-03-05 00:29:08.696351 | orchestrator | Thursday 05 March 2026 00:29:08 +0000 (0:00:00.742) 0:03:49.240 ******** 2026-03-05 00:29:08.696360 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772669105.2080405, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-05 00:29:08.696377 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772669135.3943412, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-05 00:29:08.696384 | orchestrator | 
changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772669131.6038046, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-05 00:29:08.696442 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772669130.937156, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-05 00:29:14.157712 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772669129.2154346, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-05 00:29:14.157813 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772669131.2393863, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-05 00:29:14.157825 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772669140.7294924, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-05 00:29:14.157833 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-05 00:29:14.157862 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-05 00:29:14.157883 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-05 00:29:14.157891 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-05 00:29:14.157914 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-05 00:29:14.157923 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-05 00:29:14.157930 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-05 00:29:14.157938 | orchestrator | 2026-03-05 00:29:14.157948 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2026-03-05 00:29:14.157957 | orchestrator | Thursday 05 March 2026 00:29:09 +0000 (0:00:01.067) 0:03:50.308 ******** 2026-03-05 00:29:14.157965 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:29:14.157974 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:29:14.157987 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:29:14.157995 | orchestrator | changed: [testbed-manager] 2026-03-05 00:29:14.158002 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:29:14.158009 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:29:14.158060 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:29:14.158070 | orchestrator | 2026-03-05 00:29:14.158077 | orchestrator | TASK [osism.commons.motd : Copy issue file] 
************************************ 2026-03-05 00:29:14.158085 | orchestrator | Thursday 05 March 2026 00:29:10 +0000 (0:00:01.097) 0:03:51.406 ******** 2026-03-05 00:29:14.158093 | orchestrator | changed: [testbed-manager] 2026-03-05 00:29:14.158100 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:29:14.158108 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:29:14.158115 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:29:14.158123 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:29:14.158130 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:29:14.158137 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:29:14.158144 | orchestrator | 2026-03-05 00:29:14.158151 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2026-03-05 00:29:14.158159 | orchestrator | Thursday 05 March 2026 00:29:11 +0000 (0:00:01.188) 0:03:52.594 ******** 2026-03-05 00:29:14.158166 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:29:14.158173 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:29:14.158181 | orchestrator | changed: [testbed-manager] 2026-03-05 00:29:14.158188 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:29:14.158196 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:29:14.158203 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:29:14.158210 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:29:14.158217 | orchestrator | 2026-03-05 00:29:14.158224 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2026-03-05 00:29:14.158237 | orchestrator | Thursday 05 March 2026 00:29:12 +0000 (0:00:01.151) 0:03:53.746 ******** 2026-03-05 00:29:14.158244 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:29:14.158252 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:29:14.158261 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:29:14.158269 | orchestrator | skipping: [testbed-manager] 
2026-03-05 00:29:14.158278 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:29:14.158285 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:29:14.158293 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:29:14.158302 | orchestrator |
2026-03-05 00:29:14.158310 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2026-03-05 00:29:14.158319 | orchestrator | Thursday 05 March 2026 00:29:12 +0000 (0:00:00.301) 0:03:54.047 ********
2026-03-05 00:29:14.158327 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:29:14.158336 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:29:14.158344 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:29:14.158353 | orchestrator | ok: [testbed-manager]
2026-03-05 00:29:14.158362 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:29:14.158370 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:29:14.158386 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:29:14.158401 | orchestrator |
2026-03-05 00:29:14.158408 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2026-03-05 00:29:14.158416 | orchestrator | Thursday 05 March 2026 00:29:13 +0000 (0:00:00.773) 0:03:54.821 ********
2026-03-05 00:29:14.158433 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 00:29:14.158447 | orchestrator |
2026-03-05 00:29:14.158455 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2026-03-05 00:29:14.158489 | orchestrator | Thursday 05 March 2026 00:29:14 +0000 (0:00:00.423) 0:03:55.244 ********
2026-03-05 00:30:31.129972 | orchestrator | ok: [testbed-manager]
2026-03-05 00:30:31.130102 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:30:31.130111 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:30:31.130132 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:30:31.130137 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:30:31.130141 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:30:31.130145 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:30:31.130151 | orchestrator |
2026-03-05 00:30:31.130156 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2026-03-05 00:30:31.130162 | orchestrator | Thursday 05 March 2026 00:29:22 +0000 (0:00:07.959) 0:04:03.203 ********
2026-03-05 00:30:31.130167 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:30:31.130171 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:30:31.130175 | orchestrator | ok: [testbed-manager]
2026-03-05 00:30:31.130179 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:30:31.130184 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:30:31.130188 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:30:31.130192 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:30:31.130196 | orchestrator |
2026-03-05 00:30:31.130201 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2026-03-05 00:30:31.130205 | orchestrator | Thursday 05 March 2026 00:29:23 +0000 (0:00:01.379) 0:04:04.583 ********
2026-03-05 00:30:31.130210 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:30:31.130214 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:30:31.130218 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:30:31.130222 | orchestrator | ok: [testbed-manager]
2026-03-05 00:30:31.130226 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:30:31.130230 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:30:31.130235 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:30:31.130239 | orchestrator |
2026-03-05 00:30:31.130243 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2026-03-05 00:30:31.130247 | orchestrator | Thursday 05 March 2026 00:29:24 +0000 (0:00:01.020) 0:04:05.603 ********
2026-03-05 00:30:31.130251 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:30:31.130255 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:30:31.130260 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:30:31.130264 | orchestrator | ok: [testbed-manager]
2026-03-05 00:30:31.130268 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:30:31.130272 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:30:31.130276 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:30:31.130281 | orchestrator |
2026-03-05 00:30:31.130285 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2026-03-05 00:30:31.130290 | orchestrator | Thursday 05 March 2026 00:29:24 +0000 (0:00:00.284) 0:04:05.887 ********
2026-03-05 00:30:31.130294 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:30:31.130298 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:30:31.130303 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:30:31.130307 | orchestrator | ok: [testbed-manager]
2026-03-05 00:30:31.130311 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:30:31.130315 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:30:31.130319 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:30:31.130323 | orchestrator |
2026-03-05 00:30:31.130327 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2026-03-05 00:30:31.130332 | orchestrator | Thursday 05 March 2026 00:29:25 +0000 (0:00:00.348) 0:04:06.236 ********
2026-03-05 00:30:31.130336 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:30:31.130340 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:30:31.130344 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:30:31.130349 | orchestrator | ok: [testbed-manager]
2026-03-05 00:30:31.130353 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:30:31.130357 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:30:31.130361 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:30:31.130365 | orchestrator |
2026-03-05 00:30:31.130369 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2026-03-05 00:30:31.130374 | orchestrator | Thursday 05 March 2026 00:29:25 +0000 (0:00:00.275) 0:04:06.511 ********
2026-03-05 00:30:31.130378 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:30:31.130382 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:30:31.130386 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:30:31.130395 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:30:31.130399 | orchestrator | ok: [testbed-manager]
2026-03-05 00:30:31.130403 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:30:31.130407 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:30:31.130411 | orchestrator |
2026-03-05 00:30:31.130416 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2026-03-05 00:30:31.130453 | orchestrator | Thursday 05 March 2026 00:29:30 +0000 (0:00:05.433) 0:04:11.945 ********
2026-03-05 00:30:31.130460 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 00:30:31.130468 | orchestrator |
2026-03-05 00:30:31.130472 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2026-03-05 00:30:31.130477 | orchestrator | Thursday 05 March 2026 00:29:31 +0000 (0:00:00.421) 0:04:12.367 ********
2026-03-05 00:30:31.130481 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2026-03-05 00:30:31.130486 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2026-03-05 00:30:31.130490 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2026-03-05 00:30:31.130495 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:30:31.130499 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2026-03-05 00:30:31.130503 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2026-03-05 00:30:31.130507 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2026-03-05 00:30:31.130512 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:30:31.130516 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2026-03-05 00:30:31.130520 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2026-03-05 00:30:31.130524 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:30:31.130528 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2026-03-05 00:30:31.130533 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2026-03-05 00:30:31.130537 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:30:31.130541 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2026-03-05 00:30:31.130547 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2026-03-05 00:30:31.130563 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:30:31.130568 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:30:31.130573 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2026-03-05 00:30:31.130578 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2026-03-05 00:30:31.130583 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:30:31.130588 | orchestrator |
2026-03-05 00:30:31.130593 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2026-03-05 00:30:31.130597 | orchestrator | Thursday 05 March 2026 00:29:31 +0000 (0:00:00.339) 0:04:12.706 ********
2026-03-05 00:30:31.130603 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 00:30:31.130607 | orchestrator |
2026-03-05 00:30:31.130612 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2026-03-05 00:30:31.130617 | orchestrator | Thursday 05 March 2026 00:29:32 +0000 (0:00:00.431) 0:04:13.138 ********
2026-03-05 00:30:31.130623 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2026-03-05 00:30:31.130627 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2026-03-05 00:30:31.130632 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:30:31.130637 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2026-03-05 00:30:31.130642 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:30:31.130647 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:30:31.130655 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2026-03-05 00:30:31.130660 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:30:31.130665 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2026-03-05 00:30:31.130682 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2026-03-05 00:30:31.130687 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:30:31.130692 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:30:31.130697 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2026-03-05 00:30:31.130702 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:30:31.130706 | orchestrator |
2026-03-05 00:30:31.130711 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-03-05 00:30:31.130716 | orchestrator | Thursday 05 March 2026 00:29:32 +0000 (0:00:00.312) 0:04:13.451 ********
2026-03-05 00:30:31.130721 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 00:30:31.130727 | orchestrator |
2026-03-05 00:30:31.130731 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-03-05 00:30:31.130736 | orchestrator | Thursday 05 March 2026 00:29:32 +0000 (0:00:00.452) 0:04:13.903 ********
2026-03-05 00:30:31.130741 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:30:31.130746 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:30:31.130751 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:30:31.130756 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:30:31.130761 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:30:31.130766 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:30:31.130771 | orchestrator | changed: [testbed-manager]
2026-03-05 00:30:31.130776 | orchestrator |
2026-03-05 00:30:31.130781 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2026-03-05 00:30:31.130786 | orchestrator | Thursday 05 March 2026 00:30:07 +0000 (0:00:34.452) 0:04:48.356 ********
2026-03-05 00:30:31.130791 | orchestrator | changed: [testbed-manager]
2026-03-05 00:30:31.130795 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:30:31.130799 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:30:31.130803 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:30:31.130807 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:30:31.130811 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:30:31.130818 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:30:31.130822 | orchestrator |
2026-03-05 00:30:31.130827 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2026-03-05 00:30:31.130831 | orchestrator | Thursday 05 March 2026 00:30:15 +0000 (0:00:07.952) 0:04:56.308 ********
2026-03-05 00:30:31.130835 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:30:31.130839 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:30:31.130843 | orchestrator | changed: [testbed-manager]
2026-03-05 00:30:31.130847 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:30:31.130851 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:30:31.130855 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:30:31.130859 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:30:31.130864 | orchestrator |
2026-03-05 00:30:31.130868 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2026-03-05 00:30:31.130872 | orchestrator | Thursday 05 March 2026 00:30:23 +0000 (0:00:07.993) 0:05:04.302 ********
2026-03-05 00:30:31.130876 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:30:31.130880 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:30:31.130884 | orchestrator | ok: [testbed-manager]
2026-03-05 00:30:31.130889 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:30:31.130893 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:30:31.130897 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:30:31.130901 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:30:31.130905 | orchestrator |
2026-03-05 00:30:31.130909 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2026-03-05 00:30:31.130916 | orchestrator | Thursday 05 March 2026 00:30:25 +0000 (0:00:01.888) 0:05:06.190 ********
2026-03-05 00:30:31.130921 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:30:31.130925 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:30:31.130929 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:30:31.130933 | orchestrator | changed: [testbed-manager]
2026-03-05 00:30:31.130937 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:30:31.130941 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:30:31.130945 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:30:31.130949 | orchestrator |
2026-03-05 00:30:31.130956 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2026-03-05 00:30:43.518208 | orchestrator | Thursday 05 March 2026 00:30:31 +0000 (0:00:06.022) 0:05:12.213 ********
2026-03-05 00:30:43.518359 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 00:30:43.518375 | orchestrator |
2026-03-05 00:30:43.518386 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2026-03-05 00:30:43.518396 | orchestrator | Thursday 05 March 2026 00:30:31 +0000 (0:00:00.416) 0:05:12.630 ********
2026-03-05 00:30:43.518405 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:30:43.518454 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:30:43.518463 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:30:43.518472 | orchestrator | changed: [testbed-manager]
2026-03-05 00:30:43.518481 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:30:43.518489 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:30:43.518498 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:30:43.518507 | orchestrator |
2026-03-05 00:30:43.518516 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2026-03-05 00:30:43.518525 | orchestrator | Thursday 05 March 2026 00:30:32 +0000 (0:00:00.782) 0:05:13.412 ********
2026-03-05 00:30:43.518534 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:30:43.518544 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:30:43.518552 | orchestrator | ok: [testbed-manager]
2026-03-05 00:30:43.518561 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:30:43.518569 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:30:43.518578 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:30:43.518586 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:30:43.518595 | orchestrator |
2026-03-05 00:30:43.518604 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2026-03-05 00:30:43.518612 | orchestrator | Thursday 05 March 2026 00:30:34 +0000 (0:00:01.780) 0:05:15.192 ********
2026-03-05 00:30:43.518621 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:30:43.518630 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:30:43.518638 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:30:43.518647 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:30:43.518655 | orchestrator | changed: [testbed-manager]
2026-03-05 00:30:43.518664 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:30:43.518673 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:30:43.518681 | orchestrator |
2026-03-05 00:30:43.518690 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2026-03-05 00:30:43.518699 | orchestrator | Thursday 05 March 2026 00:30:35 +0000 (0:00:01.738) 0:05:16.931 ********
2026-03-05 00:30:43.518707 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:30:43.518716 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:30:43.518724 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:30:43.518733 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:30:43.518741 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:30:43.518750 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:30:43.518759 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:30:43.518767 | orchestrator |
2026-03-05 00:30:43.518776 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2026-03-05 00:30:43.518808 | orchestrator | Thursday 05 March 2026 00:30:36 +0000 (0:00:00.264) 0:05:17.195 ********
2026-03-05 00:30:43.518817 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:30:43.518825 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:30:43.518834 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:30:43.518842 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:30:43.518851 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:30:43.518859 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:30:43.518868 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:30:43.518876 | orchestrator |
2026-03-05 00:30:43.518885 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2026-03-05 00:30:43.518894 | orchestrator | Thursday 05 March 2026 00:30:36 +0000 (0:00:00.459) 0:05:17.655 ********
2026-03-05 00:30:43.518903 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:30:43.518911 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:30:43.518920 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:30:43.518928 | orchestrator | ok: [testbed-manager]
2026-03-05 00:30:43.518937 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:30:43.518958 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:30:43.518967 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:30:43.518975 | orchestrator |
2026-03-05 00:30:43.518984 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2026-03-05 00:30:43.518993 | orchestrator | Thursday 05 March 2026 00:30:36 +0000 (0:00:00.314) 0:05:17.970 ********
2026-03-05 00:30:43.519001 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:30:43.519010 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:30:43.519019 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:30:43.519027 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:30:43.519036 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:30:43.519044 | orchestrator | skipping: [testbed-node-1]
2026-03-05
00:30:43.519053 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:30:43.519061 | orchestrator | 2026-03-05 00:30:43.519070 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2026-03-05 00:30:43.519079 | orchestrator | Thursday 05 March 2026 00:30:37 +0000 (0:00:00.302) 0:05:18.272 ******** 2026-03-05 00:30:43.519088 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:30:43.519097 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:30:43.519105 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:30:43.519113 | orchestrator | ok: [testbed-manager] 2026-03-05 00:30:43.519122 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:30:43.519130 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:30:43.519139 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:30:43.519147 | orchestrator | 2026-03-05 00:30:43.519156 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2026-03-05 00:30:43.519165 | orchestrator | Thursday 05 March 2026 00:30:37 +0000 (0:00:00.316) 0:05:18.589 ******** 2026-03-05 00:30:43.519173 | orchestrator | ok: [testbed-node-3] =>  2026-03-05 00:30:43.519182 | orchestrator |  docker_version: 5:27.5.1 2026-03-05 00:30:43.519190 | orchestrator | ok: [testbed-node-4] =>  2026-03-05 00:30:43.519199 | orchestrator |  docker_version: 5:27.5.1 2026-03-05 00:30:43.519208 | orchestrator | ok: [testbed-node-5] =>  2026-03-05 00:30:43.519216 | orchestrator |  docker_version: 5:27.5.1 2026-03-05 00:30:43.519225 | orchestrator | ok: [testbed-manager] =>  2026-03-05 00:30:43.519233 | orchestrator |  docker_version: 5:27.5.1 2026-03-05 00:30:43.519256 | orchestrator | ok: [testbed-node-0] =>  2026-03-05 00:30:43.519266 | orchestrator |  docker_version: 5:27.5.1 2026-03-05 00:30:43.519274 | orchestrator | ok: [testbed-node-1] =>  2026-03-05 00:30:43.519283 | orchestrator |  docker_version: 5:27.5.1 2026-03-05 00:30:43.519291 | orchestrator | ok: [testbed-node-2] =>  
2026-03-05 00:30:43.519300 | orchestrator |  docker_version: 5:27.5.1 2026-03-05 00:30:43.519308 | orchestrator | 2026-03-05 00:30:43.519317 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2026-03-05 00:30:43.519325 | orchestrator | Thursday 05 March 2026 00:30:37 +0000 (0:00:00.303) 0:05:18.893 ******** 2026-03-05 00:30:43.519340 | orchestrator | ok: [testbed-node-3] =>  2026-03-05 00:30:43.519349 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-05 00:30:43.519358 | orchestrator | ok: [testbed-node-4] =>  2026-03-05 00:30:43.519366 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-05 00:30:43.519375 | orchestrator | ok: [testbed-node-5] =>  2026-03-05 00:30:43.519383 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-05 00:30:43.519392 | orchestrator | ok: [testbed-manager] =>  2026-03-05 00:30:43.519400 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-05 00:30:43.519408 | orchestrator | ok: [testbed-node-0] =>  2026-03-05 00:30:43.519447 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-05 00:30:43.519455 | orchestrator | ok: [testbed-node-1] =>  2026-03-05 00:30:43.519464 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-05 00:30:43.519472 | orchestrator | ok: [testbed-node-2] =>  2026-03-05 00:30:43.519481 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-05 00:30:43.519489 | orchestrator | 2026-03-05 00:30:43.519498 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2026-03-05 00:30:43.519507 | orchestrator | Thursday 05 March 2026 00:30:38 +0000 (0:00:00.295) 0:05:19.188 ******** 2026-03-05 00:30:43.519515 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:30:43.519524 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:30:43.519532 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:30:43.519541 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:30:43.519549 | orchestrator | skipping: [testbed-node-0] 
2026-03-05 00:30:43.519558 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:30:43.519566 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:30:43.519575 | orchestrator | 2026-03-05 00:30:43.519583 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2026-03-05 00:30:43.519592 | orchestrator | Thursday 05 March 2026 00:30:38 +0000 (0:00:00.294) 0:05:19.483 ******** 2026-03-05 00:30:43.519601 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:30:43.519609 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:30:43.519618 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:30:43.519626 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:30:43.519635 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:30:43.519643 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:30:43.519652 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:30:43.519660 | orchestrator | 2026-03-05 00:30:43.519669 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2026-03-05 00:30:43.519678 | orchestrator | Thursday 05 March 2026 00:30:38 +0000 (0:00:00.330) 0:05:19.814 ******** 2026-03-05 00:30:43.519688 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:30:43.519698 | orchestrator | 2026-03-05 00:30:43.519707 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2026-03-05 00:30:43.519716 | orchestrator | Thursday 05 March 2026 00:30:39 +0000 (0:00:00.557) 0:05:20.372 ******** 2026-03-05 00:30:43.519724 | orchestrator | ok: [testbed-manager] 2026-03-05 00:30:43.519733 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:30:43.519742 | orchestrator | ok: [testbed-node-5] 2026-03-05 
00:30:43.519750 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:30:43.519759 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:30:43.519767 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:30:43.519776 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:30:43.519784 | orchestrator | 2026-03-05 00:30:43.519793 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2026-03-05 00:30:43.519801 | orchestrator | Thursday 05 March 2026 00:30:40 +0000 (0:00:00.827) 0:05:21.199 ******** 2026-03-05 00:30:43.519814 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:30:43.519823 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:30:43.519832 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:30:43.519840 | orchestrator | ok: [testbed-manager] 2026-03-05 00:30:43.519855 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:30:43.519864 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:30:43.519873 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:30:43.519881 | orchestrator | 2026-03-05 00:30:43.519890 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2026-03-05 00:30:43.519899 | orchestrator | Thursday 05 March 2026 00:30:43 +0000 (0:00:02.965) 0:05:24.165 ******** 2026-03-05 00:30:43.519908 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2026-03-05 00:30:43.519917 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2026-03-05 00:30:43.519925 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2026-03-05 00:30:43.519935 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2026-03-05 00:30:43.519943 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2026-03-05 00:30:43.519951 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2026-03-05 00:30:43.519960 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:30:43.519969 | orchestrator | skipping: 
[testbed-node-5] => (item=containerd)  2026-03-05 00:30:43.519977 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2026-03-05 00:30:43.519986 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2026-03-05 00:30:43.519994 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:30:43.520003 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2026-03-05 00:30:43.520012 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2026-03-05 00:30:43.520020 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:30:43.520029 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2026-03-05 00:30:43.520037 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2026-03-05 00:30:43.520051 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2026-03-05 00:31:45.049468 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2026-03-05 00:31:45.049575 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:31:45.049607 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2026-03-05 00:31:45.049633 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2026-03-05 00:31:45.049648 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2026-03-05 00:31:45.049663 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:31:45.049672 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:31:45.049680 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2026-03-05 00:31:45.049688 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2026-03-05 00:31:45.049696 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2026-03-05 00:31:45.049704 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:31:45.049713 | orchestrator | 2026-03-05 00:31:45.049722 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2026-03-05 00:31:45.049732 | orchestrator | 
Thursday 05 March 2026 00:30:43 +0000 (0:00:00.665) 0:05:24.831 ******** 2026-03-05 00:31:45.049741 | orchestrator | ok: [testbed-manager] 2026-03-05 00:31:45.049749 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:31:45.049757 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:31:45.049765 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:31:45.049773 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:31:45.049781 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:31:45.049790 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:31:45.049798 | orchestrator | 2026-03-05 00:31:45.049806 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2026-03-05 00:31:45.049814 | orchestrator | Thursday 05 March 2026 00:30:50 +0000 (0:00:06.871) 0:05:31.702 ******** 2026-03-05 00:31:45.049822 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:31:45.049831 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:31:45.049838 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:31:45.049846 | orchestrator | ok: [testbed-manager] 2026-03-05 00:31:45.049855 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:31:45.049883 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:31:45.049891 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:31:45.049899 | orchestrator | 2026-03-05 00:31:45.049907 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2026-03-05 00:31:45.049915 | orchestrator | Thursday 05 March 2026 00:30:51 +0000 (0:00:01.109) 0:05:32.812 ******** 2026-03-05 00:31:45.049923 | orchestrator | ok: [testbed-manager] 2026-03-05 00:31:45.049932 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:31:45.049942 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:31:45.049951 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:31:45.049960 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:31:45.049970 | 
orchestrator | changed: [testbed-node-0] 2026-03-05 00:31:45.049979 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:31:45.049989 | orchestrator | 2026-03-05 00:31:45.049999 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2026-03-05 00:31:45.050009 | orchestrator | Thursday 05 March 2026 00:30:59 +0000 (0:00:08.281) 0:05:41.093 ******** 2026-03-05 00:31:45.050127 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:31:45.050139 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:31:45.050148 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:31:45.050158 | orchestrator | changed: [testbed-manager] 2026-03-05 00:31:45.050167 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:31:45.050176 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:31:45.050186 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:31:45.050195 | orchestrator | 2026-03-05 00:31:45.050205 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2026-03-05 00:31:45.050215 | orchestrator | Thursday 05 March 2026 00:31:03 +0000 (0:00:03.670) 0:05:44.764 ******** 2026-03-05 00:31:45.050224 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:31:45.050234 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:31:45.050243 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:31:45.050252 | orchestrator | ok: [testbed-manager] 2026-03-05 00:31:45.050266 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:31:45.050281 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:31:45.050294 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:31:45.050308 | orchestrator | 2026-03-05 00:31:45.050338 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2026-03-05 00:31:45.050354 | orchestrator | Thursday 05 March 2026 00:31:05 +0000 (0:00:01.590) 0:05:46.355 ******** 2026-03-05 00:31:45.050363 | orchestrator | changed: 
[testbed-node-3] 2026-03-05 00:31:45.050371 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:31:45.050378 | orchestrator | ok: [testbed-manager] 2026-03-05 00:31:45.050410 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:31:45.050419 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:31:45.050427 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:31:45.050434 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:31:45.050442 | orchestrator | 2026-03-05 00:31:45.050450 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2026-03-05 00:31:45.050458 | orchestrator | Thursday 05 March 2026 00:31:06 +0000 (0:00:01.413) 0:05:47.768 ******** 2026-03-05 00:31:45.050466 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:31:45.050475 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:31:45.050484 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:31:45.050497 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:31:45.050516 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:31:45.050530 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:31:45.050543 | orchestrator | changed: [testbed-manager] 2026-03-05 00:31:45.050555 | orchestrator | 2026-03-05 00:31:45.050569 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2026-03-05 00:31:45.050583 | orchestrator | Thursday 05 March 2026 00:31:07 +0000 (0:00:00.941) 0:05:48.710 ******** 2026-03-05 00:31:45.050596 | orchestrator | ok: [testbed-manager] 2026-03-05 00:31:45.050609 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:31:45.050623 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:31:45.050643 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:31:45.050652 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:31:45.050660 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:31:45.050667 | orchestrator | changed: [testbed-node-1] 2026-03-05 
00:31:45.050675 | orchestrator | 2026-03-05 00:31:45.050683 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2026-03-05 00:31:45.050708 | orchestrator | Thursday 05 March 2026 00:31:17 +0000 (0:00:09.501) 0:05:58.212 ******** 2026-03-05 00:31:45.050722 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:31:45.050735 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:31:45.050748 | orchestrator | changed: [testbed-manager] 2026-03-05 00:31:45.050762 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:31:45.050775 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:31:45.050787 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:31:45.050795 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:31:45.050803 | orchestrator | 2026-03-05 00:31:45.050811 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2026-03-05 00:31:45.050819 | orchestrator | Thursday 05 March 2026 00:31:18 +0000 (0:00:01.223) 0:05:59.436 ******** 2026-03-05 00:31:45.050827 | orchestrator | ok: [testbed-manager] 2026-03-05 00:31:45.050834 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:31:45.050842 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:31:45.050850 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:31:45.050858 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:31:45.050865 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:31:45.050873 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:31:45.050881 | orchestrator | 2026-03-05 00:31:45.050889 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2026-03-05 00:31:45.050897 | orchestrator | Thursday 05 March 2026 00:31:27 +0000 (0:00:09.288) 0:06:08.724 ******** 2026-03-05 00:31:45.050905 | orchestrator | ok: [testbed-manager] 2026-03-05 00:31:45.050913 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:31:45.050920 | 
orchestrator | changed: [testbed-node-5] 2026-03-05 00:31:45.050928 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:31:45.050936 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:31:45.050944 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:31:45.050951 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:31:45.050959 | orchestrator | 2026-03-05 00:31:45.050967 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2026-03-05 00:31:45.050975 | orchestrator | Thursday 05 March 2026 00:31:38 +0000 (0:00:10.609) 0:06:19.334 ******** 2026-03-05 00:31:45.050983 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2026-03-05 00:31:45.050991 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2026-03-05 00:31:45.050999 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2026-03-05 00:31:45.051007 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2026-03-05 00:31:45.051015 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2026-03-05 00:31:45.051022 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2026-03-05 00:31:45.051030 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2026-03-05 00:31:45.051038 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2026-03-05 00:31:45.051046 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2026-03-05 00:31:45.051054 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2026-03-05 00:31:45.051061 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2026-03-05 00:31:45.051069 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2026-03-05 00:31:45.051077 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2026-03-05 00:31:45.051085 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2026-03-05 00:31:45.051093 | orchestrator | 2026-03-05 00:31:45.051101 | orchestrator | TASK [osism.services.docker : Install python3 
docker package] ****************** 2026-03-05 00:31:45.051109 | orchestrator | Thursday 05 March 2026 00:31:39 +0000 (0:00:01.207) 0:06:20.541 ******** 2026-03-05 00:31:45.051125 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:31:45.051133 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:31:45.051141 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:31:45.051148 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:31:45.051156 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:31:45.051164 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:31:45.051172 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:31:45.051180 | orchestrator | 2026-03-05 00:31:45.051187 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2026-03-05 00:31:45.051195 | orchestrator | Thursday 05 March 2026 00:31:40 +0000 (0:00:00.560) 0:06:21.101 ******** 2026-03-05 00:31:45.051203 | orchestrator | ok: [testbed-manager] 2026-03-05 00:31:45.051211 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:31:45.051219 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:31:45.051227 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:31:45.051235 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:31:45.051243 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:31:45.051251 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:31:45.051258 | orchestrator | 2026-03-05 00:31:45.051267 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2026-03-05 00:31:45.051276 | orchestrator | Thursday 05 March 2026 00:31:43 +0000 (0:00:03.987) 0:06:25.089 ******** 2026-03-05 00:31:45.051284 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:31:45.051292 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:31:45.051299 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:31:45.051307 | orchestrator | skipping: 
[testbed-manager] 2026-03-05 00:31:45.051315 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:31:45.051322 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:31:45.051330 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:31:45.051338 | orchestrator | 2026-03-05 00:31:45.051347 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2026-03-05 00:31:45.051355 | orchestrator | Thursday 05 March 2026 00:31:44 +0000 (0:00:00.746) 0:06:25.835 ******** 2026-03-05 00:31:45.051363 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2026-03-05 00:31:45.051370 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2026-03-05 00:31:45.051436 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:31:45.051447 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2026-03-05 00:31:45.051455 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2026-03-05 00:31:45.051463 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:31:45.051471 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2026-03-05 00:31:45.051482 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2026-03-05 00:31:45.051495 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:31:45.051516 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2026-03-05 00:32:04.771096 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2026-03-05 00:32:04.771292 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:32:04.771313 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2026-03-05 00:32:04.771335 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2026-03-05 00:32:04.771353 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:32:04.771371 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2026-03-05 00:32:04.771417 | 
orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2026-03-05 00:32:04.771436 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:32:04.771455 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2026-03-05 00:32:04.771474 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2026-03-05 00:32:04.771493 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:32:04.771513 | orchestrator | 2026-03-05 00:32:04.771535 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2026-03-05 00:32:04.771589 | orchestrator | Thursday 05 March 2026 00:31:45 +0000 (0:00:00.655) 0:06:26.491 ******** 2026-03-05 00:32:04.771609 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:32:04.771627 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:32:04.771646 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:32:04.771668 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:32:04.771687 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:32:04.771707 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:32:04.771728 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:32:04.771746 | orchestrator | 2026-03-05 00:32:04.771764 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2026-03-05 00:32:04.771778 | orchestrator | Thursday 05 March 2026 00:31:45 +0000 (0:00:00.536) 0:06:27.028 ******** 2026-03-05 00:32:04.771791 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:32:04.771804 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:32:04.771817 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:32:04.771830 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:32:04.771844 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:32:04.771856 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:32:04.771867 | orchestrator | skipping: [testbed-node-2] 
2026-03-05 00:32:04.771878 | orchestrator | 
2026-03-05 00:32:04.771889 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2026-03-05 00:32:04.771900 | orchestrator | Thursday 05 March 2026 00:31:46 +0000 (0:00:00.577) 0:06:27.606 ********
2026-03-05 00:32:04.771911 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:32:04.771921 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:32:04.771932 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:32:04.771943 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:32:04.771953 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:32:04.771964 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:32:04.771974 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:32:04.771985 | orchestrator | 
2026-03-05 00:32:04.771996 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2026-03-05 00:32:04.772007 | orchestrator | Thursday 05 March 2026 00:31:47 +0000 (0:00:00.685) 0:06:28.291 ********
2026-03-05 00:32:04.772018 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:32:04.772029 | orchestrator | ok: [testbed-manager]
2026-03-05 00:32:04.772040 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:32:04.772051 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:32:04.772061 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:32:04.772072 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:32:04.772083 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:32:04.772093 | orchestrator | 
2026-03-05 00:32:04.772104 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2026-03-05 00:32:04.772115 | orchestrator | Thursday 05 March 2026 00:31:49 +0000 (0:00:02.078) 0:06:30.369 ********
2026-03-05 00:32:04.772128 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 00:32:04.772141 | orchestrator | 
2026-03-05 00:32:04.772167 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2026-03-05 00:32:04.772179 | orchestrator | Thursday 05 March 2026 00:31:50 +0000 (0:00:00.915) 0:06:31.285 ********
2026-03-05 00:32:04.772190 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:32:04.772200 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:32:04.772211 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:32:04.772221 | orchestrator | ok: [testbed-manager]
2026-03-05 00:32:04.772233 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:32:04.772243 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:32:04.772254 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:32:04.772265 | orchestrator | 
2026-03-05 00:32:04.772276 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2026-03-05 00:32:04.772329 | orchestrator | Thursday 05 March 2026 00:31:51 +0000 (0:00:01.060) 0:06:32.208 ********
2026-03-05 00:32:04.772348 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:32:04.772367 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:32:04.772425 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:32:04.772445 | orchestrator | ok: [testbed-manager]
2026-03-05 00:32:04.772464 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:32:04.772483 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:32:04.772501 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:32:04.772520 | orchestrator | 
2026-03-05 00:32:04.772538 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2026-03-05 00:32:04.772556 | orchestrator | Thursday 05 March 2026 00:31:52 +0000 (0:00:01.296) 0:06:33.269 ********
2026-03-05 00:32:04.772575 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:32:04.772594 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:32:04.772612 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:32:04.772631 | orchestrator | ok: [testbed-manager]
2026-03-05 00:32:04.772649 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:32:04.772668 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:32:04.772680 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:32:04.772691 | orchestrator | 
2026-03-05 00:32:04.772705 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2026-03-05 00:32:04.772750 | orchestrator | Thursday 05 March 2026 00:31:53 +0000 (0:00:01.379) 0:06:34.566 ********
2026-03-05 00:32:04.772770 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:32:04.772790 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:32:04.772808 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:32:04.772827 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:32:04.772845 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:32:04.772863 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:32:04.772881 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:32:04.772898 | orchestrator | 
2026-03-05 00:32:04.772918 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2026-03-05 00:32:04.772937 | orchestrator | Thursday 05 March 2026 00:31:54 +0000 (0:00:01.379) 0:06:35.945 ********
2026-03-05 00:32:04.772956 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:32:04.772974 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:32:04.772993 | orchestrator | ok: [testbed-manager]
2026-03-05 00:32:04.773012 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:32:04.773029 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:32:04.773047 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:32:04.773067 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:32:04.773084 | orchestrator | 
2026-03-05 00:32:04.773102 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2026-03-05 00:32:04.773117 | orchestrator | Thursday 05 March 2026 00:31:56 +0000 (0:00:01.296) 0:06:37.241 ********
2026-03-05 00:32:04.773128 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:32:04.773139 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:32:04.773150 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:32:04.773160 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:32:04.773171 | orchestrator | changed: [testbed-manager]
2026-03-05 00:32:04.773182 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:32:04.773193 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:32:04.773204 | orchestrator | 
2026-03-05 00:32:04.773215 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2026-03-05 00:32:04.773226 | orchestrator | Thursday 05 March 2026 00:31:57 +0000 (0:00:01.268) 0:06:38.510 ********
2026-03-05 00:32:04.773237 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 00:32:04.773249 | orchestrator | 
2026-03-05 00:32:04.773260 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2026-03-05 00:32:04.773276 | orchestrator | Thursday 05 March 2026 00:31:58 +0000 (0:00:01.141) 0:06:39.651 ********
2026-03-05 00:32:04.773316 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:32:04.773335 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:32:04.773355 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:32:04.773476 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:32:04.773493 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:32:04.773504 | orchestrator | ok: [testbed-manager]
2026-03-05 00:32:04.773515 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:32:04.773526 | orchestrator | 
2026-03-05 00:32:04.773537 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2026-03-05 00:32:04.773548 | orchestrator | Thursday 05 March 2026 00:31:59 +0000 (0:00:01.301) 0:06:40.952 ********
2026-03-05 00:32:04.773559 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:32:04.773570 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:32:04.773581 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:32:04.773591 | orchestrator | ok: [testbed-manager]
2026-03-05 00:32:04.773602 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:32:04.773613 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:32:04.773624 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:32:04.773635 | orchestrator | 
2026-03-05 00:32:04.773646 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2026-03-05 00:32:04.773657 | orchestrator | Thursday 05 March 2026 00:32:00 +0000 (0:00:01.137) 0:06:42.089 ********
2026-03-05 00:32:04.773668 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:32:04.773679 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:32:04.773690 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:32:04.773700 | orchestrator | ok: [testbed-manager]
2026-03-05 00:32:04.773710 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:32:04.773719 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:32:04.773729 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:32:04.773739 | orchestrator | 
2026-03-05 00:32:04.773749 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2026-03-05 00:32:04.773759 | orchestrator | Thursday 05 March 2026 00:32:02 +0000 (0:00:01.133) 0:06:43.223 ********
2026-03-05 00:32:04.773769 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:32:04.773778 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:32:04.773788 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:32:04.773797 | orchestrator | ok: [testbed-manager]
2026-03-05 00:32:04.773807 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:32:04.773816 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:32:04.773826 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:32:04.773836 | orchestrator | 
2026-03-05 00:32:04.773845 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2026-03-05 00:32:04.773855 | orchestrator | Thursday 05 March 2026 00:32:03 +0000 (0:00:01.449) 0:06:44.672 ********
2026-03-05 00:32:04.773865 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 00:32:04.773875 | orchestrator | 
2026-03-05 00:32:04.773885 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-05 00:32:04.773894 | orchestrator | Thursday 05 March 2026 00:32:04 +0000 (0:00:01.041) 0:06:45.713 ********
2026-03-05 00:32:04.773904 | orchestrator | 
2026-03-05 00:32:04.773914 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-05 00:32:04.773923 | orchestrator | Thursday 05 March 2026 00:32:04 +0000 (0:00:00.045) 0:06:45.759 ********
2026-03-05 00:32:04.773933 | orchestrator | 
2026-03-05 00:32:04.773943 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-05 00:32:04.773952 | orchestrator | Thursday 05 March 2026 00:32:04 +0000 (0:00:00.042) 0:06:45.801 ********
2026-03-05 00:32:04.773962 | orchestrator | 
2026-03-05 00:32:04.773972 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-05 00:32:04.773992 | orchestrator | Thursday 05 March 2026 00:32:04 +0000 (0:00:00.049) 0:06:45.850 ********
2026-03-05 00:32:32.620005 | orchestrator | 
2026-03-05 00:32:32.620139 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-05 00:32:32.620154 | orchestrator | Thursday 05 March 2026 00:32:04 +0000 (0:00:00.048) 0:06:45.899 ********
2026-03-05 00:32:32.620164 | orchestrator | 
2026-03-05 00:32:32.620174 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-05 00:32:32.620184 | orchestrator | Thursday 05 March 2026 00:32:04 +0000 (0:00:00.040) 0:06:45.939 ********
2026-03-05 00:32:32.620193 | orchestrator | 
2026-03-05 00:32:32.620203 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-05 00:32:32.620213 | orchestrator | Thursday 05 March 2026 00:32:04 +0000 (0:00:00.039) 0:06:45.979 ********
2026-03-05 00:32:32.620222 | orchestrator | 
2026-03-05 00:32:32.620232 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-03-05 00:32:32.620241 | orchestrator | Thursday 05 March 2026 00:32:04 +0000 (0:00:00.046) 0:06:46.026 ********
2026-03-05 00:32:32.620251 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:32:32.620261 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:32:32.620271 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:32:32.620280 | orchestrator | 
2026-03-05 00:32:32.620290 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2026-03-05 00:32:32.620299 | orchestrator | Thursday 05 March 2026 00:32:06 +0000 (0:00:01.272) 0:06:47.298 ********
2026-03-05 00:32:32.620309 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:32:32.620319 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:32:32.620329 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:32:32.620338 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:32:32.620348 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:32:32.620416 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:32:32.620436 | orchestrator | changed: [testbed-manager]
2026-03-05 00:32:32.620453 | orchestrator | 
2026-03-05 00:32:32.620468 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] ***********
2026-03-05 00:32:32.620478 | orchestrator | Thursday 05 March 2026 00:32:08 +0000 (0:00:02.370) 0:06:49.668 ********
2026-03-05 00:32:32.620488 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:32:32.620497 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:32:32.620507 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:32:32.620516 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:32:32.620525 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:32:32.620535 | orchestrator | changed: [testbed-manager]
2026-03-05 00:32:32.620547 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:32:32.620557 | orchestrator | 
2026-03-05 00:32:32.620570 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2026-03-05 00:32:32.620583 | orchestrator | Thursday 05 March 2026 00:32:09 +0000 (0:00:01.224) 0:06:50.893 ********
2026-03-05 00:32:32.620595 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:32:32.620608 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:32:32.620621 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:32:32.620633 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:32:32.620646 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:32:32.620658 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:32:32.620671 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:32:32.620684 | orchestrator | 
2026-03-05 00:32:32.620696 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2026-03-05 00:32:32.620708 | orchestrator | Thursday 05 March 2026 00:32:12 +0000 (0:00:02.292) 0:06:53.185 ********
2026-03-05 00:32:32.620720 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:32:32.620733 | orchestrator | 
2026-03-05 00:32:32.620746 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2026-03-05 00:32:32.620758 | orchestrator | Thursday 05 March 2026 00:32:12 +0000 (0:00:00.112) 0:06:53.297 ********
2026-03-05 00:32:32.620771 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:32:32.620783 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:32:32.620796 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:32:32.620810 | orchestrator | ok: [testbed-manager]
2026-03-05 00:32:32.620832 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:32:32.620845 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:32:32.620858 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:32:32.620870 | orchestrator | 
2026-03-05 00:32:32.620898 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2026-03-05 00:32:32.620913 | orchestrator | Thursday 05 March 2026 00:32:13 +0000 (0:00:01.106) 0:06:54.404 ********
2026-03-05 00:32:32.620925 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:32:32.620937 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:32:32.620947 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:32:32.620958 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:32:32.620968 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:32:32.620979 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:32:32.620990 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:32:32.621000 | orchestrator | 
2026-03-05 00:32:32.621011 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2026-03-05 00:32:32.621022 | orchestrator | Thursday 05 March 2026 00:32:14 +0000 (0:00:00.783) 0:06:55.187 ********
2026-03-05 00:32:32.621034 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 00:32:32.621047 | orchestrator | 
2026-03-05 00:32:32.621058 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2026-03-05 00:32:32.621069 | orchestrator | Thursday 05 March 2026 00:32:15 +0000 (0:00:01.022) 0:06:56.210 ********
2026-03-05 00:32:32.621080 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:32:32.621091 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:32:32.621101 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:32:32.621112 | orchestrator | ok: [testbed-manager]
2026-03-05 00:32:32.621123 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:32:32.621133 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:32:32.621144 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:32:32.621155 | orchestrator | 
2026-03-05 00:32:32.621166 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2026-03-05 00:32:32.621177 | orchestrator | Thursday 05 March 2026 00:32:16 +0000 (0:00:00.912) 0:06:57.123 ********
2026-03-05 00:32:32.621188 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2026-03-05 00:32:32.621218 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2026-03-05 00:32:32.621230 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2026-03-05 00:32:32.621241 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2026-03-05 00:32:32.621251 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2026-03-05 00:32:32.621262 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2026-03-05 00:32:32.621273 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2026-03-05 00:32:32.621283 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2026-03-05 00:32:32.621294 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2026-03-05 00:32:32.621305 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2026-03-05 00:32:32.621316 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2026-03-05 00:32:32.621326 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2026-03-05 00:32:32.621337 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2026-03-05 00:32:32.621347 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2026-03-05 00:32:32.621419 | orchestrator | 
2026-03-05 00:32:32.621433 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2026-03-05 00:32:32.621444 | orchestrator | Thursday 05 March 2026 00:32:18 +0000 (0:00:02.790) 0:06:59.913 ********
2026-03-05 00:32:32.621455 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:32:32.621466 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:32:32.621477 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:32:32.621497 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:32:32.621508 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:32:32.621519 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:32:32.621530 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:32:32.621540 | orchestrator | 
2026-03-05 00:32:32.621551 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2026-03-05 00:32:32.621562 | orchestrator | Thursday 05 March 2026 00:32:19 +0000 (0:00:00.587) 0:07:00.501 ********
2026-03-05 00:32:32.621575 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 00:32:32.621588 | orchestrator | 
2026-03-05 00:32:32.621599 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2026-03-05 00:32:32.621610 | orchestrator | Thursday 05 March 2026 00:32:20 +0000 (0:00:00.959) 0:07:01.461 ********
2026-03-05 00:32:32.621621 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:32:32.621632 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:32:32.621642 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:32:32.621653 | orchestrator | ok: [testbed-manager]
2026-03-05 00:32:32.621664 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:32:32.621674 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:32:32.621685 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:32:32.621696 | orchestrator | 
2026-03-05 00:32:32.621707 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2026-03-05 00:32:32.621718 | orchestrator | Thursday 05 March 2026 00:32:21 +0000 (0:00:00.897) 0:07:02.358 ********
2026-03-05 00:32:32.621728 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:32:32.621739 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:32:32.621749 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:32:32.621760 | orchestrator | ok: [testbed-manager]
2026-03-05 00:32:32.621770 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:32:32.621781 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:32:32.621792 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:32:32.621802 | orchestrator | 
2026-03-05 00:32:32.621813 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2026-03-05 00:32:32.621824 | orchestrator | Thursday 05 March 2026 00:32:22 +0000 (0:00:01.100) 0:07:03.458 ********
2026-03-05 00:32:32.621835 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:32:32.621846 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:32:32.621864 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:32:32.621875 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:32:32.621886 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:32:32.621896 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:32:32.621907 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:32:32.621918 | orchestrator | 
2026-03-05 00:32:32.621928 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2026-03-05 00:32:32.621939 | orchestrator | Thursday 05 March 2026 00:32:22 +0000 (0:00:00.613) 0:07:04.071 ********
2026-03-05 00:32:32.621950 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:32:32.621961 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:32:32.621971 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:32:32.621982 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:32:32.621993 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:32:32.622003 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:32:32.622014 | orchestrator | ok: [testbed-manager]
2026-03-05 00:32:32.622104 | orchestrator | 
2026-03-05 00:32:32.622116 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2026-03-05 00:32:32.622127 | orchestrator | Thursday 05 March 2026 00:32:24 +0000 (0:00:01.468) 0:07:05.539 ********
2026-03-05 00:32:32.622137 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:32:32.622148 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:32:32.622159 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:32:32.622170 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:32:32.622187 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:32:32.622198 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:32:32.622209 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:32:32.622219 | orchestrator | 
2026-03-05 00:32:32.622230 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2026-03-05 00:32:32.622241 | orchestrator | Thursday 05 March 2026 00:32:24 +0000 (0:00:00.547) 0:07:06.086 ********
2026-03-05 00:32:32.622252 | orchestrator | ok: [testbed-manager]
2026-03-05 00:32:32.622262 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:32:32.622273 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:32:32.622283 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:32:32.622294 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:32:32.622305 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:32:32.622340 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:33:05.294388 | orchestrator | 
2026-03-05 00:33:05.294500 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2026-03-05 00:33:05.294515 | orchestrator | Thursday 05 March 2026 00:32:32 +0000 (0:00:07.691) 0:07:13.778 ********
2026-03-05 00:33:05.294526 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:33:05.294536 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:33:05.294545 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:33:05.294554 | orchestrator | ok: [testbed-manager]
2026-03-05 00:33:05.294564 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:33:05.294572 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:33:05.294581 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:33:05.294590 | orchestrator | 
2026-03-05 00:33:05.294600 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2026-03-05 00:33:05.294609 | orchestrator | Thursday 05 March 2026 00:32:34 +0000 (0:00:01.628) 0:07:15.407 ********
2026-03-05 00:33:05.294617 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:33:05.294626 | orchestrator | ok: [testbed-manager]
2026-03-05 00:33:05.294635 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:33:05.294643 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:33:05.294652 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:33:05.294661 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:33:05.294670 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:33:05.294678 | orchestrator | 
2026-03-05 00:33:05.294687 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2026-03-05 00:33:05.294696 | orchestrator | Thursday 05 March 2026 00:32:36 +0000 (0:00:01.689) 0:07:17.096 ********
2026-03-05 00:33:05.294705 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:33:05.294720 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:33:05.294734 | orchestrator | ok: [testbed-manager]
2026-03-05 00:33:05.294757 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:33:05.294775 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:33:05.294789 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:33:05.294804 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:33:05.294818 | orchestrator | 
2026-03-05 00:33:05.294832 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-03-05 00:33:05.294846 | orchestrator | Thursday 05 March 2026 00:32:37 +0000 (0:00:01.729) 0:07:18.825 ********
2026-03-05 00:33:05.294860 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:33:05.294874 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:33:05.294888 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:33:05.294903 | orchestrator | ok: [testbed-manager]
2026-03-05 00:33:05.294919 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:33:05.294935 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:33:05.294950 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:33:05.294967 | orchestrator | 
2026-03-05 00:33:05.294983 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-03-05 00:33:05.294998 | orchestrator | Thursday 05 March 2026 00:32:38 +0000 (0:00:01.062) 0:07:19.888 ********
2026-03-05 00:33:05.295009 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:33:05.295020 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:33:05.295061 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:33:05.295082 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:33:05.295096 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:33:05.295110 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:33:05.295124 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:33:05.295138 | orchestrator | 
2026-03-05 00:33:05.295152 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2026-03-05 00:33:05.295167 | orchestrator | Thursday 05 March 2026 00:32:39 +0000 (0:00:00.853) 0:07:20.742 ********
2026-03-05 00:33:05.295181 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:33:05.295196 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:33:05.295212 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:33:05.295227 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:33:05.295242 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:33:05.295252 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:33:05.295260 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:33:05.295269 | orchestrator | 
2026-03-05 00:33:05.295278 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2026-03-05 00:33:05.295287 | orchestrator | Thursday 05 March 2026 00:32:40 +0000 (0:00:00.548) 0:07:21.291 ********
2026-03-05 00:33:05.295295 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:33:05.295304 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:33:05.295313 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:33:05.295322 | orchestrator | ok: [testbed-manager]
2026-03-05 00:33:05.295330 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:33:05.295364 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:33:05.295374 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:33:05.295383 | orchestrator | 
2026-03-05 00:33:05.295392 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2026-03-05 00:33:05.295400 | orchestrator | Thursday 05 March 2026 00:32:40 +0000 (0:00:00.527) 0:07:21.818 ********
2026-03-05 00:33:05.295409 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:33:05.295418 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:33:05.295426 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:33:05.295435 | orchestrator | ok: [testbed-manager]
2026-03-05 00:33:05.295443 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:33:05.295452 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:33:05.295460 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:33:05.295469 | orchestrator | 
2026-03-05 00:33:05.295478 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2026-03-05 00:33:05.295486 | orchestrator | Thursday 05 March 2026 00:32:41 +0000 (0:00:00.749) 0:07:22.568 ********
2026-03-05 00:33:05.295495 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:33:05.295503 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:33:05.295512 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:33:05.295520 | orchestrator | ok: [testbed-manager]
2026-03-05 00:33:05.295529 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:33:05.295537 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:33:05.295546 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:33:05.295554 | orchestrator | 
2026-03-05 00:33:05.295563 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2026-03-05 00:33:05.295571 | orchestrator | Thursday 05 March 2026 00:32:42 +0000 (0:00:00.566) 0:07:23.134 ********
2026-03-05 00:33:05.295580 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:33:05.295589 | orchestrator | ok: [testbed-manager]
2026-03-05 00:33:05.295597 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:33:05.295606 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:33:05.295614 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:33:05.295622 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:33:05.295631 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:33:05.295639 | orchestrator | 
2026-03-05 00:33:05.295666 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-03-05 00:33:05.295675 | orchestrator | Thursday 05 March 2026 00:32:47 +0000 (0:00:05.453) 0:07:28.587 ********
2026-03-05 00:33:05.295684 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:33:05.295702 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:33:05.295730 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:33:05.295739 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:33:05.295759 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:33:05.295768 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:33:05.295777 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:33:05.295785 | orchestrator | 
2026-03-05 00:33:05.295794 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2026-03-05 00:33:05.295803 | orchestrator | Thursday 05 March 2026 00:32:48 +0000 (0:00:00.587) 0:07:29.175 ********
2026-03-05 00:33:05.295814 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 00:33:05.295825 | orchestrator | 
2026-03-05 00:33:05.295834 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2026-03-05 00:33:05.295843 | orchestrator | Thursday 05 March 2026 00:32:49 +0000 (0:00:01.054) 0:07:30.230 ********
2026-03-05 00:33:05.295852 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:33:05.295860 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:33:05.295869 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:33:05.295878 | orchestrator | ok: [testbed-manager]
2026-03-05 00:33:05.295886 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:33:05.295895 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:33:05.295904 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:33:05.295912 | orchestrator | 
2026-03-05 00:33:05.295921 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2026-03-05 00:33:05.295935 | orchestrator | Thursday 05 March 2026 00:32:50 +0000 (0:00:01.855) 0:07:32.086 ********
2026-03-05 00:33:05.295950 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:33:05.295971 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:33:05.295990 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:33:05.296005 | orchestrator | ok: [testbed-manager]
2026-03-05 00:33:05.296019 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:33:05.296033 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:33:05.296047 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:33:05.296060 | orchestrator | 
2026-03-05 00:33:05.296075 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2026-03-05 00:33:05.296088 | orchestrator | Thursday 05 March 2026 00:32:52 +0000 (0:00:01.155) 0:07:33.242 ********
2026-03-05 00:33:05.296101 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:33:05.296114 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:33:05.296127 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:33:05.296140 | orchestrator | ok: [testbed-manager]
2026-03-05 00:33:05.296154 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:33:05.296168 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:33:05.296182 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:33:05.296195 | orchestrator | 
2026-03-05 00:33:05.296209 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2026-03-05 00:33:05.296223 | orchestrator | Thursday 05 March 2026 00:32:53 +0000 (0:00:00.925) 0:07:34.168 ********
2026-03-05 00:33:05.296238 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-05 00:33:05.296254 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-05 00:33:05.296270 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-05 00:33:05.296295 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-05 00:33:05.296311 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-05 00:33:05.296334 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-05 00:33:05.296377 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-05 00:33:05.296386 | orchestrator | 
2026-03-05 00:33:05.296395 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2026-03-05 00:33:05.296404 | orchestrator | Thursday 05 March 2026 00:32:55 +0000 (0:00:02.042) 0:07:36.210 ********
2026-03-05 00:33:05.296413 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 00:33:05.296423 | orchestrator | 
2026-03-05 00:33:05.296432 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2026-03-05 00:33:05.296440 | 
orchestrator | Thursday 05 March 2026 00:32:55 +0000 (0:00:00.858) 0:07:37.069 ******** 2026-03-05 00:33:05.296449 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:33:05.296457 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:33:05.296466 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:33:05.296475 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:33:05.296483 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:33:05.296492 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:33:05.296501 | orchestrator | changed: [testbed-manager] 2026-03-05 00:33:05.296509 | orchestrator | 2026-03-05 00:33:05.296528 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2026-03-05 00:33:36.864522 | orchestrator | Thursday 05 March 2026 00:33:05 +0000 (0:00:09.310) 0:07:46.379 ******** 2026-03-05 00:33:36.864638 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:33:36.864655 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:33:36.864667 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:33:36.864678 | orchestrator | ok: [testbed-manager] 2026-03-05 00:33:36.864690 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:33:36.864701 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:33:36.864711 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:33:36.864723 | orchestrator | 2026-03-05 00:33:36.864735 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2026-03-05 00:33:36.864746 | orchestrator | Thursday 05 March 2026 00:33:07 +0000 (0:00:02.024) 0:07:48.404 ******** 2026-03-05 00:33:36.864757 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:33:36.864768 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:33:36.864779 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:33:36.864790 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:33:36.864801 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:33:36.864812 | orchestrator | ok: [testbed-node-2] 
2026-03-05 00:33:36.864823 | orchestrator | 2026-03-05 00:33:36.864834 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2026-03-05 00:33:36.864846 | orchestrator | Thursday 05 March 2026 00:33:08 +0000 (0:00:01.298) 0:07:49.702 ******** 2026-03-05 00:33:36.864857 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:33:36.864869 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:33:36.864880 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:33:36.864891 | orchestrator | changed: [testbed-manager] 2026-03-05 00:33:36.864902 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:33:36.864913 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:33:36.864924 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:33:36.864935 | orchestrator | 2026-03-05 00:33:36.864946 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2026-03-05 00:33:36.864957 | orchestrator | 2026-03-05 00:33:36.864968 | orchestrator | TASK [Include hardening role] ************************************************** 2026-03-05 00:33:36.864979 | orchestrator | Thursday 05 March 2026 00:33:09 +0000 (0:00:01.254) 0:07:50.957 ******** 2026-03-05 00:33:36.864991 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:33:36.865026 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:33:36.865041 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:33:36.865054 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:33:36.865067 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:33:36.865079 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:33:36.865092 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:33:36.865105 | orchestrator | 2026-03-05 00:33:36.865117 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2026-03-05 00:33:36.865130 | orchestrator | 2026-03-05 00:33:36.865143 | orchestrator | TASK 
[osism.services.journald : Copy configuration file] *********************** 2026-03-05 00:33:36.865156 | orchestrator | Thursday 05 March 2026 00:33:10 +0000 (0:00:00.748) 0:07:51.705 ******** 2026-03-05 00:33:36.865168 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:33:36.865181 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:33:36.865194 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:33:36.865208 | orchestrator | changed: [testbed-manager] 2026-03-05 00:33:36.865221 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:33:36.865234 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:33:36.865246 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:33:36.865259 | orchestrator | 2026-03-05 00:33:36.865272 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2026-03-05 00:33:36.865285 | orchestrator | Thursday 05 March 2026 00:33:12 +0000 (0:00:01.402) 0:07:53.108 ******** 2026-03-05 00:33:36.865298 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:33:36.865333 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:33:36.865347 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:33:36.865359 | orchestrator | ok: [testbed-manager] 2026-03-05 00:33:36.865372 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:33:36.865385 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:33:36.865398 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:33:36.865409 | orchestrator | 2026-03-05 00:33:36.865420 | orchestrator | TASK [Include auditd role] ***************************************************** 2026-03-05 00:33:36.865432 | orchestrator | Thursday 05 March 2026 00:33:13 +0000 (0:00:01.505) 0:07:54.613 ******** 2026-03-05 00:33:36.865443 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:33:36.865468 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:33:36.865480 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:33:36.865491 | orchestrator | skipping: [testbed-manager] 
2026-03-05 00:33:36.865502 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:33:36.865513 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:33:36.865523 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:33:36.865534 | orchestrator | 2026-03-05 00:33:36.865545 | orchestrator | TASK [Include smartd role] ***************************************************** 2026-03-05 00:33:36.865556 | orchestrator | Thursday 05 March 2026 00:33:14 +0000 (0:00:00.763) 0:07:55.376 ******** 2026-03-05 00:33:36.865568 | orchestrator | included: osism.services.smartd for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:33:36.865580 | orchestrator | 2026-03-05 00:33:36.865591 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2026-03-05 00:33:36.865602 | orchestrator | Thursday 05 March 2026 00:33:15 +0000 (0:00:00.904) 0:07:56.280 ******** 2026-03-05 00:33:36.865615 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:33:36.865628 | orchestrator | 2026-03-05 00:33:36.865640 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2026-03-05 00:33:36.865651 | orchestrator | Thursday 05 March 2026 00:33:16 +0000 (0:00:00.875) 0:07:57.156 ******** 2026-03-05 00:33:36.865662 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:33:36.865673 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:33:36.865684 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:33:36.865695 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:33:36.865714 | orchestrator | changed: [testbed-manager] 2026-03-05 00:33:36.865725 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:33:36.865736 | 
orchestrator | changed: [testbed-node-0] 2026-03-05 00:33:36.865747 | orchestrator | 2026-03-05 00:33:36.865775 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2026-03-05 00:33:36.865787 | orchestrator | Thursday 05 March 2026 00:33:24 +0000 (0:00:08.767) 0:08:05.923 ******** 2026-03-05 00:33:36.865797 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:33:36.865808 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:33:36.865819 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:33:36.865830 | orchestrator | changed: [testbed-manager] 2026-03-05 00:33:36.865840 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:33:36.865851 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:33:36.865862 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:33:36.865873 | orchestrator | 2026-03-05 00:33:36.865884 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2026-03-05 00:33:36.865895 | orchestrator | Thursday 05 March 2026 00:33:25 +0000 (0:00:00.932) 0:08:06.856 ******** 2026-03-05 00:33:36.865906 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:33:36.865916 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:33:36.865927 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:33:36.865938 | orchestrator | changed: [testbed-manager] 2026-03-05 00:33:36.865949 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:33:36.865959 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:33:36.865970 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:33:36.865981 | orchestrator | 2026-03-05 00:33:36.865991 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2026-03-05 00:33:36.866002 | orchestrator | Thursday 05 March 2026 00:33:27 +0000 (0:00:01.401) 0:08:08.257 ******** 2026-03-05 00:33:36.866013 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:33:36.866082 | orchestrator | 
changed: [testbed-node-4] 2026-03-05 00:33:36.866093 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:33:36.866104 | orchestrator | changed: [testbed-manager] 2026-03-05 00:33:36.866115 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:33:36.866125 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:33:36.866136 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:33:36.866146 | orchestrator | 2026-03-05 00:33:36.866157 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2026-03-05 00:33:36.866168 | orchestrator | Thursday 05 March 2026 00:33:29 +0000 (0:00:02.027) 0:08:10.284 ******** 2026-03-05 00:33:36.866179 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:33:36.866189 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:33:36.866200 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:33:36.866211 | orchestrator | changed: [testbed-manager] 2026-03-05 00:33:36.866221 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:33:36.866232 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:33:36.866242 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:33:36.866253 | orchestrator | 2026-03-05 00:33:36.866264 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2026-03-05 00:33:36.866275 | orchestrator | Thursday 05 March 2026 00:33:30 +0000 (0:00:01.247) 0:08:11.532 ******** 2026-03-05 00:33:36.866285 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:33:36.866300 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:33:36.866357 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:33:36.866375 | orchestrator | changed: [testbed-manager] 2026-03-05 00:33:36.866394 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:33:36.866411 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:33:36.866431 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:33:36.866443 | orchestrator | 2026-03-05 
00:33:36.866454 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2026-03-05 00:33:36.866465 | orchestrator | 2026-03-05 00:33:36.866476 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2026-03-05 00:33:36.866487 | orchestrator | Thursday 05 March 2026 00:33:31 +0000 (0:00:01.118) 0:08:12.651 ******** 2026-03-05 00:33:36.866508 | orchestrator | included: osism.commons.state for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:33:36.866520 | orchestrator | 2026-03-05 00:33:36.866531 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-03-05 00:33:36.866542 | orchestrator | Thursday 05 March 2026 00:33:32 +0000 (0:00:01.128) 0:08:13.779 ******** 2026-03-05 00:33:36.866553 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:33:36.866571 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:33:36.866582 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:33:36.866593 | orchestrator | ok: [testbed-manager] 2026-03-05 00:33:36.866604 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:33:36.866615 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:33:36.866626 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:33:36.866637 | orchestrator | 2026-03-05 00:33:36.866648 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-03-05 00:33:36.866660 | orchestrator | Thursday 05 March 2026 00:33:33 +0000 (0:00:00.940) 0:08:14.720 ******** 2026-03-05 00:33:36.866671 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:33:36.866682 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:33:36.866693 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:33:36.866704 | orchestrator | changed: [testbed-manager] 2026-03-05 00:33:36.866715 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:33:36.866726 | 
orchestrator | changed: [testbed-node-1] 2026-03-05 00:33:36.866737 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:33:36.866748 | orchestrator | 2026-03-05 00:33:36.866759 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2026-03-05 00:33:36.866770 | orchestrator | Thursday 05 March 2026 00:33:34 +0000 (0:00:01.216) 0:08:15.937 ******** 2026-03-05 00:33:36.866782 | orchestrator | included: osism.commons.state for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:33:36.866793 | orchestrator | 2026-03-05 00:33:36.866804 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-03-05 00:33:36.866815 | orchestrator | Thursday 05 March 2026 00:33:36 +0000 (0:00:01.181) 0:08:17.118 ******** 2026-03-05 00:33:36.866827 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:33:36.866838 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:33:36.866849 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:33:36.866860 | orchestrator | ok: [testbed-manager] 2026-03-05 00:33:36.866871 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:33:36.866882 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:33:36.866893 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:33:36.866903 | orchestrator | 2026-03-05 00:33:36.866924 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-03-05 00:33:38.659512 | orchestrator | Thursday 05 March 2026 00:33:36 +0000 (0:00:00.828) 0:08:17.947 ******** 2026-03-05 00:33:38.659617 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:33:38.659632 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:33:38.659642 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:33:38.659652 | orchestrator | changed: [testbed-manager] 2026-03-05 00:33:38.659662 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:33:38.659671 | 
orchestrator | changed: [testbed-node-1] 2026-03-05 00:33:38.659680 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:33:38.659690 | orchestrator | 2026-03-05 00:33:38.659701 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-05 00:33:38.659712 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-03-05 00:33:38.659723 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-03-05 00:33:38.659733 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-03-05 00:33:38.659769 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-03-05 00:33:38.659779 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0 2026-03-05 00:33:38.659788 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-03-05 00:33:38.659798 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-03-05 00:33:38.659807 | orchestrator | 2026-03-05 00:33:38.659817 | orchestrator | 2026-03-05 00:33:38.659826 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-05 00:33:38.659837 | orchestrator | Thursday 05 March 2026 00:33:38 +0000 (0:00:01.371) 0:08:19.319 ******** 2026-03-05 00:33:38.659846 | orchestrator | =============================================================================== 2026-03-05 00:33:38.659856 | orchestrator | osism.commons.packages : Install required packages --------------------- 84.21s 2026-03-05 00:33:38.659866 | orchestrator | osism.commons.packages : Download required packages -------------------- 35.90s 2026-03-05 00:33:38.659875 | orchestrator | 
osism.commons.cleanup : Cleanup installed packages --------------------- 34.45s 2026-03-05 00:33:38.659885 | orchestrator | osism.commons.repository : Update package cache ------------------------ 15.82s 2026-03-05 00:33:38.659894 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 12.78s 2026-03-05 00:33:38.659904 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 12.73s 2026-03-05 00:33:38.659914 | orchestrator | osism.services.docker : Install docker package ------------------------- 10.61s 2026-03-05 00:33:38.659923 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.50s 2026-03-05 00:33:38.659933 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.31s 2026-03-05 00:33:38.659942 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.29s 2026-03-05 00:33:38.659952 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.77s 2026-03-05 00:33:38.659975 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.28s 2026-03-05 00:33:38.659985 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.99s 2026-03-05 00:33:38.659995 | orchestrator | osism.services.rng : Install rng package -------------------------------- 7.96s 2026-03-05 00:33:38.660004 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.95s 2026-03-05 00:33:38.660014 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.69s 2026-03-05 00:33:38.660024 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.87s 2026-03-05 00:33:38.660033 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.02s 2026-03-05 00:33:38.660043 | orchestrator | 
osism.commons.services : Populate service facts ------------------------- 5.96s 2026-03-05 00:33:38.660054 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.45s 2026-03-05 00:33:39.013903 | orchestrator | + osism apply fail2ban 2026-03-05 00:33:52.188177 | orchestrator | 2026-03-05 00:33:52 | INFO  | Prepare task for execution of fail2ban. 2026-03-05 00:33:52.310473 | orchestrator | 2026-03-05 00:33:52 | INFO  | Task d657644f-5141-4d8e-86ae-57fe67dd268f (fail2ban) was prepared for execution. 2026-03-05 00:33:52.310578 | orchestrator | 2026-03-05 00:33:52 | INFO  | It takes a moment until task d657644f-5141-4d8e-86ae-57fe67dd268f (fail2ban) has been started and output is visible here. 2026-03-05 00:34:14.918160 | orchestrator | 2026-03-05 00:34:14.918264 | orchestrator | PLAY [Apply role fail2ban] ***************************************************** 2026-03-05 00:34:14.918382 | orchestrator | 2026-03-05 00:34:14.918393 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] *** 2026-03-05 00:34:14.918401 | orchestrator | Thursday 05 March 2026 00:33:57 +0000 (0:00:00.339) 0:00:00.339 ******** 2026-03-05 00:34:14.918410 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-05 00:34:14.918419 | orchestrator | 2026-03-05 00:34:14.918427 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] ********************** 2026-03-05 00:34:14.918434 | orchestrator | Thursday 05 March 2026 00:33:58 +0000 (0:00:01.319) 0:00:01.658 ******** 2026-03-05 00:34:14.918442 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:34:14.918451 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:34:14.918458 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:34:14.918465 | 
orchestrator | changed: [testbed-node-0] 2026-03-05 00:34:14.918473 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:34:14.918480 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:34:14.918487 | orchestrator | changed: [testbed-manager] 2026-03-05 00:34:14.918495 | orchestrator | 2026-03-05 00:34:14.918502 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] ********************** 2026-03-05 00:34:14.918510 | orchestrator | Thursday 05 March 2026 00:34:09 +0000 (0:00:11.110) 0:00:12.768 ******** 2026-03-05 00:34:14.918517 | orchestrator | changed: [testbed-manager] 2026-03-05 00:34:14.918525 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:34:14.918532 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:34:14.918539 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:34:14.918546 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:34:14.918554 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:34:14.918561 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:34:14.918568 | orchestrator | 2026-03-05 00:34:14.918576 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] *********************** 2026-03-05 00:34:14.918583 | orchestrator | Thursday 05 March 2026 00:34:11 +0000 (0:00:01.517) 0:00:14.286 ******** 2026-03-05 00:34:14.918591 | orchestrator | ok: [testbed-manager] 2026-03-05 00:34:14.918599 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:34:14.918606 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:34:14.918613 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:34:14.918621 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:34:14.918628 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:34:14.918635 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:34:14.918643 | orchestrator | 2026-03-05 00:34:14.918650 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] ***************** 2026-03-05 00:34:14.918658 | orchestrator | Thursday 05 
March 2026 00:34:12 +0000 (0:00:01.506) 0:00:15.792 ******** 2026-03-05 00:34:14.918665 | orchestrator | changed: [testbed-manager] 2026-03-05 00:34:14.918673 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:34:14.918681 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:34:14.918688 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:34:14.918695 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:34:14.918703 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:34:14.918710 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:34:14.918717 | orchestrator | 2026-03-05 00:34:14.918725 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-05 00:34:14.918732 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 00:34:14.918741 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 00:34:14.918748 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 00:34:14.918756 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 00:34:14.918781 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 00:34:14.918789 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 00:34:14.918796 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 00:34:14.918804 | orchestrator | 2026-03-05 00:34:14.918811 | orchestrator | 2026-03-05 00:34:14.918818 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-05 00:34:14.918836 | orchestrator | Thursday 05 March 2026 00:34:14 +0000 (0:00:01.667) 0:00:17.460 ******** 2026-03-05 00:34:14.918844 | 
orchestrator | =============================================================================== 2026-03-05 00:34:14.918852 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.11s 2026-03-05 00:34:14.918859 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.67s 2026-03-05 00:34:14.918875 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.52s 2026-03-05 00:34:14.918883 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.51s 2026-03-05 00:34:14.918890 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.32s 2026-03-05 00:34:15.319579 | orchestrator | + [[ -e /etc/redhat-release ]] 2026-03-05 00:34:15.319662 | orchestrator | + osism apply network 2026-03-05 00:34:27.521848 | orchestrator | 2026-03-05 00:34:27 | INFO  | Prepare task for execution of network. 2026-03-05 00:34:27.592753 | orchestrator | 2026-03-05 00:34:27 | INFO  | Task 95dd727f-79aa-47dd-8c09-12b01a10e2cc (network) was prepared for execution. 2026-03-05 00:34:27.592855 | orchestrator | 2026-03-05 00:34:27 | INFO  | It takes a moment until task 95dd727f-79aa-47dd-8c09-12b01a10e2cc (network) has been started and output is visible here. 
PLAY [Apply role network] ******************************************************

TASK [osism.commons.network : Gather variables for each operating system] ******
Thursday 05 March 2026 00:34:32 +0000 (0:00:00.262)       0:00:00.262 ********
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.commons.network : Include type specific tasks] *********************
Thursday 05 March 2026 00:34:32 +0000 (0:00:00.715)       0:00:00.977 ********
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [osism.commons.network : Install required packages] ***********************
Thursday 05 March 2026 00:34:33 +0000 (0:00:01.241)       0:00:02.219 ********
ok: [testbed-manager]
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.commons.network : Remove ifupdown package] *************************
Thursday 05 March 2026 00:34:36 +0000 (0:00:02.103)       0:00:04.323 ********
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.commons.network : Create required directories] *********************
Thursday 05 March 2026 00:34:37 +0000 (0:00:01.773)       0:00:06.096 ********
ok: [testbed-manager] => (item=/etc/netplan)
ok: [testbed-node-0] => (item=/etc/netplan)
ok: [testbed-node-1] => (item=/etc/netplan)
ok: [testbed-node-2] => (item=/etc/netplan)
ok: [testbed-node-3] => (item=/etc/netplan)
ok: [testbed-node-4] => (item=/etc/netplan)
ok: [testbed-node-5] => (item=/etc/netplan)

TASK [osism.commons.network : Prepare netplan configuration template] **********
Thursday 05 March 2026 00:34:38 +0000 (0:00:00.981)       0:00:07.078 ********
ok: [testbed-manager -> localhost]
ok: [testbed-node-0 -> localhost]
ok: [testbed-node-1 -> localhost]
ok: [testbed-node-2 -> localhost]
ok: [testbed-node-3 -> localhost]
ok: [testbed-node-4 -> localhost]
ok: [testbed-node-5 -> localhost]

TASK [osism.commons.network : Copy netplan configuration] **********************
Thursday 05 March 2026 00:34:42 +0000 (0:00:03.675)       0:00:10.754 ********
changed: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [osism.commons.network : Remove netplan configuration template] ***********
Thursday 05 March 2026 00:34:44 +0000 (0:00:01.649)       0:00:12.403 ********
ok: [testbed-manager -> localhost]
ok: [testbed-node-0 -> localhost]
ok: [testbed-node-1 -> localhost]
ok: [testbed-node-3 -> localhost]
ok: [testbed-node-2 -> localhost]
ok: [testbed-node-4 -> localhost]
ok: [testbed-node-5 -> localhost]

TASK [osism.commons.network : Check if path for interface file exists] *********
Thursday 05 March 2026 00:34:46 +0000 (0:00:01.901)       0:00:14.305 ********
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.commons.network : Copy interfaces file] ****************************
Thursday 05 March 2026 00:34:47 +0000 (0:00:01.159)       0:00:15.464 ********
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [osism.commons.network : Install package networkd-dispatcher] *************
Thursday 05 March 2026 00:34:47 +0000 (0:00:00.712)       0:00:16.176 ********
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.commons.network : Copy dispatcher scripts] *************************
Thursday 05 March 2026 00:34:50 +0000 (0:00:02.385)       0:00:18.562 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'})

TASK [osism.commons.network : Manage service networkd-dispatcher] **************
Thursday 05 March 2026 00:34:51 +0000 (0:00:00.966)       0:00:19.529 ********
ok: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [osism.commons.network : Include cleanup tasks] ***************************
Thursday 05 March 2026 00:34:53 +0000 (0:00:01.789)       0:00:21.319 ********
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [osism.commons.network : List existing configuration files] ***************
Thursday 05 March 2026 00:34:54 +0000 (0:00:01.336)       0:00:22.655 ********
ok: [testbed-node-0]
ok: [testbed-manager]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.commons.network : Set network_configured_files fact] ***************
Thursday 05 March 2026 00:34:55 +0000 (0:00:01.210)       0:00:23.865 ********
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.commons.network : Remove unused configuration files] ***************
Thursday 05 March 2026 00:34:56 +0000 (0:00:00.681)       0:00:24.547 ********
skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)
skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)
skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)
skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)
changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)
changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml)
changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml)
skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)
changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml)
changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml)
changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml)
skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)
changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml)

TASK [osism.commons.network : Include dummy interfaces] ************************
Thursday 05 March 2026 00:34:57 +0000 (0:00:01.467)       0:00:26.014 ********
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [osism.commons.network : Include vxlan interfaces] ************************
Thursday 05 March 2026 00:34:58 +0000 (0:00:00.746)       0:00:26.761 ********
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-0, testbed-manager, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-5, testbed-node-4

TASK [osism.commons.network : Create systemd networkd netdev files] ************
Thursday 05 March 2026 00:35:03 +0000 (0:00:04.818)       0:00:31.579 ********
changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})

TASK [osism.commons.network : Create systemd networkd network files] ***********
Thursday 05 March 2026 00:35:09 +0000 (0:00:05.989)       0:00:37.568 ********
changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})

TASK [osism.commons.network : Include networkd cleanup tasks] ******************
Thursday 05 March 2026 00:35:15 +0000 (0:00:05.870)       0:00:43.439 ********
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [osism.commons.network : List existing configuration files] ***************
Thursday 05 March 2026 00:35:16 +0000 (0:00:01.423)       0:00:44.863 ********
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.commons.network : Remove unused configuration files] ***************
Thursday 05 March 2026 00:35:17 +0000 (0:00:01.177)       0:00:46.041 ********
skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)
skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)
skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)
skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)
skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)
skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)
skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)
skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)
skipping: [testbed-manager]
skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)
skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)
skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)
skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)
skipping: [testbed-node-0]
skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)
skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)
skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)
skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)
skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)
skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)
skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)
skipping: [testbed-node-2]
skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)
skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)
skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)
skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
skipping: [testbed-node-5]

TASK [osism.commons.network : Include network extra init] **********************
Thursday 05 March 2026 00:35:18 +0000 (0:00:01.066)       0:00:47.107 ********
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/network-extra-init.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [osism.commons.network : Deploy network-extra-init script] ****************
Thursday 05 March 2026 00:35:20 +0000 (0:00:01.396)       0:00:48.504 ********
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [osism.commons.network : Deploy network-extra-init systemd service] *******
Thursday 05 March 2026 00:35:20 +0000 (0:00:00.669)       0:00:49.173 ********
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [osism.commons.network : Enable and start network-extra-init service] *****
Thursday 05 March 2026 00:35:21 +0000 (0:00:00.858)       0:00:50.031 ********
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [osism.commons.network : Disable and stop network-extra-init service] *****
Thursday 05 March 2026 00:35:22 +0000 (0:00:00.667)       0:00:50.698 ********
ok: [testbed-node-2]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-manager]
ok: [testbed-node-5]

TASK [osism.commons.network : Remove network-extra-init systemd service] *******
Thursday 05 March 2026 00:35:24 +0000 (0:00:01.701)       0:00:52.400 ********
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.commons.network : Remove network-extra-init script] ****************
Thursday 05 March 2026 00:35:25 +0000 (0:00:00.996)       0:00:53.397 ********
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
Thursday 05 March 2026 00:35:27 +0000 (0:00:02.489)       0:00:55.887 ********
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
Thursday 05 March 2026 00:35:28 +0000 (0:00:00.874)       0:00:56.761 ********
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

PLAY RECAP *********************************************************************
testbed-manager            : ok=25   changed=5    unreachable=0    failed=0    skipped=8    rescued=0    ignored=0
testbed-node-0             : ok=24   changed=5    unreachable=0    failed=0    skipped=9    rescued=0    ignored=0
testbed-node-1             : ok=24   changed=5    unreachable=0    failed=0    skipped=9    rescued=0    ignored=0
testbed-node-2             : ok=24   changed=5    unreachable=0    failed=0    skipped=9    rescued=0    ignored=0
testbed-node-3             : ok=24   changed=5    unreachable=0    failed=0    skipped=9    rescued=0    ignored=0
testbed-node-4             : ok=24   changed=5    unreachable=0    failed=0    skipped=9    rescued=0    ignored=0
testbed-node-5             : ok=24   changed=5    unreachable=0    failed=0    skipped=9    rescued=0    ignored=0

TASKS RECAP ********************************************************************
Thursday 05 March 2026 00:35:29 +0000 (0:00:00.586)       0:00:57.348 ********
===============================================================================
osism.commons.network : Create systemd networkd netdev files ------------ 5.99s
osism.commons.network : Create systemd networkd network files ----------- 5.87s
osism.commons.network : Include vxlan interfaces ------------------------ 4.82s
osism.commons.network : Prepare netplan configuration template ---------- 3.68s
osism.commons.network : Remove network-extra-init script ---------------- 2.49s
osism.commons.network : Install package networkd-dispatcher ------------- 2.39s
osism.commons.network : Install required packages ----------------------- 2.10s
osism.commons.network : Remove netplan configuration template ----------- 1.90s
osism.commons.network : Manage service networkd-dispatcher -------------- 1.79s
osism.commons.network : Remove ifupdown package ------------------------- 1.77s
osism.commons.network : Disable and stop network-extra-init service ----- 1.70s
osism.commons.network : Copy netplan configuration ---------------------- 1.65s
osism.commons.network : Remove unused configuration files --------------- 1.47s
osism.commons.network : Include networkd cleanup tasks ------------------ 1.42s
osism.commons.network : Include network extra init ---------------------- 1.40s
osism.commons.network : Include cleanup tasks --------------------------- 1.34s
osism.commons.network : Include type specific tasks --------------------- 1.24s
osism.commons.network : List existing configuration files --------------- 1.21s
osism.commons.network : List existing configuration files --------------- 1.18s
osism.commons.network : Check if path for interface file exists --------- 1.16s

+ osism apply wireguard
2026-03-05 00:35:42 | INFO  | Prepare task for execution of wireguard.
2026-03-05 00:35:42 | INFO  | Task 5e7157b6-8fec-482e-9b7f-816a3dc5509a (wireguard) was prepared for execution.
2026-03-05 00:35:42 | INFO  | It takes a moment until task 5e7157b6-8fec-482e-9b7f-816a3dc5509a (wireguard) has been started and output is visible here.
2026-03-05 00:36:03.048563 | orchestrator |
2026-03-05 00:36:03.048706 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2026-03-05 00:36:03.048731 | orchestrator |
2026-03-05 00:36:03.048753 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2026-03-05 00:36:03.048813 | orchestrator | Thursday 05 March 2026 00:35:46 +0000 (0:00:00.256) 0:00:00.256 ********
2026-03-05 00:36:03.048833 | orchestrator | ok: [testbed-manager]
2026-03-05 00:36:03.048853 | orchestrator |
2026-03-05 00:36:03.048873 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2026-03-05 00:36:03.048893 | orchestrator | Thursday 05 March 2026 00:35:48 +0000 (0:00:01.612) 0:00:01.868 ********
2026-03-05 00:36:03.048911 | orchestrator | changed: [testbed-manager]
2026-03-05 00:36:03.048930 | orchestrator |
2026-03-05 00:36:03.048942 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2026-03-05 00:36:03.048953 | orchestrator | Thursday 05 March 2026 00:35:55 +0000 (0:00:06.854) 0:00:08.723 ********
2026-03-05 00:36:03.048964 | orchestrator | changed: [testbed-manager]
2026-03-05 00:36:03.048975 | orchestrator |
2026-03-05 00:36:03.048986 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2026-03-05 00:36:03.048997 | orchestrator | Thursday 05 March 2026 00:35:55 +0000 (0:00:00.554) 0:00:09.278 ********
2026-03-05 00:36:03.049008 | orchestrator | changed: [testbed-manager]
2026-03-05 00:36:03.049019 | orchestrator |
2026-03-05 00:36:03.049029 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2026-03-05 00:36:03.049040 | orchestrator | Thursday 05 March 2026 00:35:56 +0000 (0:00:00.502) 0:00:09.780 ********
2026-03-05 00:36:03.049051 | orchestrator | ok: [testbed-manager]
2026-03-05 00:36:03.049062 | orchestrator |
2026-03-05 00:36:03.049073 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2026-03-05 00:36:03.049084 | orchestrator | Thursday 05 March 2026 00:35:56 +0000 (0:00:00.699) 0:00:10.480 ********
2026-03-05 00:36:03.049095 | orchestrator | ok: [testbed-manager]
2026-03-05 00:36:03.049108 | orchestrator |
2026-03-05 00:36:03.049202 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2026-03-05 00:36:03.049219 | orchestrator | Thursday 05 March 2026 00:35:57 +0000 (0:00:00.405) 0:00:10.885 ********
2026-03-05 00:36:03.049233 | orchestrator | ok: [testbed-manager]
2026-03-05 00:36:03.049246 | orchestrator |
2026-03-05 00:36:03.049259 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2026-03-05 00:36:03.049294 | orchestrator | Thursday 05 March 2026 00:35:57 +0000 (0:00:00.434) 0:00:11.320 ********
2026-03-05 00:36:03.049308 | orchestrator | changed: [testbed-manager]
2026-03-05 00:36:03.049321 | orchestrator |
2026-03-05 00:36:03.049334 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2026-03-05 00:36:03.049348 | orchestrator | Thursday 05 March 2026 00:35:58 +0000 (0:00:01.200) 0:00:12.520 ********
2026-03-05 00:36:03.049362 | orchestrator | changed: [testbed-manager] => (item=None)
2026-03-05 00:36:03.049375 | orchestrator | changed: [testbed-manager]
2026-03-05 00:36:03.049388 | orchestrator |
2026-03-05 00:36:03.049401 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2026-03-05 00:36:03.049415 | orchestrator | Thursday 05 March 2026 00:35:59 +0000 (0:00:00.973) 0:00:13.493 ********
2026-03-05 00:36:03.049428 | orchestrator | changed: [testbed-manager]
2026-03-05 00:36:03.049442 | orchestrator |
2026-03-05 00:36:03.049460 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2026-03-05 00:36:03.049478 | orchestrator | Thursday 05 March 2026 00:36:01 +0000 (0:00:01.768) 0:00:15.262 ********
2026-03-05 00:36:03.049494 | orchestrator | changed: [testbed-manager]
2026-03-05 00:36:03.049512 | orchestrator |
2026-03-05 00:36:03.049530 | orchestrator | PLAY RECAP *********************************************************************
2026-03-05 00:36:03.049607 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-05 00:36:03.049632 | orchestrator |
2026-03-05 00:36:03.049652 | orchestrator |
2026-03-05 00:36:03.049672 | orchestrator | TASKS RECAP ********************************************************************
2026-03-05 00:36:03.049691 | orchestrator | Thursday 05 March 2026 00:36:02 +0000 (0:00:00.963) 0:00:16.225 ********
2026-03-05 00:36:03.049710 | orchestrator | ===============================================================================
2026-03-05 00:36:03.049729 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.85s
2026-03-05 00:36:03.049747 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.77s
2026-03-05 00:36:03.049765 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.61s
2026-03-05 00:36:03.049785 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.20s
2026-03-05 00:36:03.049803 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.97s
2026-03-05 00:36:03.049821 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.96s
2026-03-05 00:36:03.049833 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.70s
2026-03-05 00:36:03.049844 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.56s
2026-03-05 00:36:03.049854 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.50s
2026-03-05 00:36:03.049872 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.43s
2026-03-05 00:36:03.049884 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.41s
2026-03-05 00:36:03.400140 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2026-03-05 00:36:03.438939 | orchestrator |   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
2026-03-05 00:36:03.439011 | orchestrator |                                  Dload  Upload   Total   Spent    Left  Speed
2026-03-05 00:36:03.519555 | orchestrator | 100    14  100    14    0     0    174      0 --:--:-- --:--:-- --:--:--   172
2026-03-05 00:36:03.536948 | orchestrator | + osism apply --environment custom workarounds
2026-03-05 00:36:05.600316 | orchestrator | 2026-03-05 00:36:05 | INFO  | Trying to run play workarounds in environment custom
2026-03-05 00:36:15.685230 | orchestrator | 2026-03-05 00:36:15 | INFO  | Prepare task for execution of workarounds.
2026-03-05 00:36:15.776843 | orchestrator | 2026-03-05 00:36:15 | INFO  | Task d55f3895-e87b-4be2-8289-ad23d5ecaf55 (workarounds) was prepared for execution.
2026-03-05 00:36:15.776961 | orchestrator | 2026-03-05 00:36:15 | INFO  | It takes a moment until task d55f3895-e87b-4be2-8289-ad23d5ecaf55 (workarounds) has been started and output is visible here.
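Editor's note: the "TASKS RECAP" blocks above (emitted by the Ansible profile_tasks callback) follow a fixed textual shape, `<task name> ---- <duration>s`, which makes them easy to mine when comparing runs of this job. A minimal sketch, assuming only the line format visible in this log (the function name and regex are illustrative, not part of OSISM):

```python
import re

# A TASKS RECAP line looks like:
#   "osism.services.wireguard : Install wireguard package -------- 6.85s"
# i.e. a task name, a run of dashes as padding, then the duration in seconds.
RECAP_LINE = re.compile(r"^(?P<task>.+?) -{2,} (?P<secs>\d+\.\d+)s$")

def parse_tasks_recap(lines):
    """Return [(task_name, seconds), ...] for every line that matches."""
    out = []
    for line in lines:
        m = RECAP_LINE.match(line.strip())
        if m:
            out.append((m.group("task"), float(m.group("secs"))))
    return out

sample = [
    "osism.services.wireguard : Install wireguard package -------------------- 6.85s",
    "osism.services.wireguard : Restart wg0 service -------------------------- 0.96s",
]
print(parse_tasks_recap(sample))
```

The lazy `.+?` keeps single dashes inside task names (e.g. "Get private key - server") from being mistaken for the padding, which always has two or more dashes.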
2026-03-05 00:36:41.878722 | orchestrator |
2026-03-05 00:36:41.878839 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-05 00:36:41.878856 | orchestrator |
2026-03-05 00:36:41.878869 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2026-03-05 00:36:41.878881 | orchestrator | Thursday 05 March 2026 00:36:20 +0000 (0:00:00.148) 0:00:00.148 ********
2026-03-05 00:36:41.878893 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2026-03-05 00:36:41.878905 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2026-03-05 00:36:41.878916 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2026-03-05 00:36:41.878927 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2026-03-05 00:36:41.878937 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2026-03-05 00:36:41.878949 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2026-03-05 00:36:41.878980 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2026-03-05 00:36:41.878991 | orchestrator |
2026-03-05 00:36:41.879003 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2026-03-05 00:36:41.879013 | orchestrator |
2026-03-05 00:36:41.879024 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-03-05 00:36:41.879035 | orchestrator | Thursday 05 March 2026 00:36:21 +0000 (0:00:00.863) 0:00:01.012 ********
2026-03-05 00:36:41.879046 | orchestrator | ok: [testbed-manager]
2026-03-05 00:36:41.879058 | orchestrator |
2026-03-05 00:36:41.879069 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2026-03-05 00:36:41.879121 | orchestrator |
2026-03-05 00:36:41.879144 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-03-05 00:36:41.879155 | orchestrator | Thursday 05 March 2026 00:36:23 +0000 (0:00:02.636) 0:00:03.648 ********
2026-03-05 00:36:41.879166 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:36:41.879177 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:36:41.879188 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:36:41.879198 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:36:41.879209 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:36:41.879219 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:36:41.879230 | orchestrator |
2026-03-05 00:36:41.879243 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2026-03-05 00:36:41.879256 | orchestrator |
2026-03-05 00:36:41.879268 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2026-03-05 00:36:41.879281 | orchestrator | Thursday 05 March 2026 00:36:25 +0000 (0:00:01.738) 0:00:05.386 ********
2026-03-05 00:36:41.879294 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-05 00:36:41.879309 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-05 00:36:41.879321 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-05 00:36:41.879334 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-05 00:36:41.879347 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-05 00:36:41.879359 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-05 00:36:41.879371 | orchestrator |
2026-03-05 00:36:41.879385 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2026-03-05 00:36:41.879398 | orchestrator | Thursday 05 March 2026 00:36:26 +0000 (0:00:01.468) 0:00:06.855 ********
2026-03-05 00:36:41.879411 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:36:41.879424 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:36:41.879436 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:36:41.879449 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:36:41.879461 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:36:41.879474 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:36:41.879486 | orchestrator |
2026-03-05 00:36:41.879499 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2026-03-05 00:36:41.879519 | orchestrator | Thursday 05 March 2026 00:36:30 +0000 (0:00:03.854) 0:00:10.710 ********
2026-03-05 00:36:41.879531 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:36:41.879545 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:36:41.879558 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:36:41.879570 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:36:41.879581 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:36:41.879592 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:36:41.879603 | orchestrator |
2026-03-05 00:36:41.879614 | orchestrator | PLAY [Add a workaround service] ************************************************
2026-03-05 00:36:41.879625 | orchestrator |
2026-03-05 00:36:41.879636 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2026-03-05 00:36:41.879655 | orchestrator | Thursday 05 March 2026 00:36:31 +0000 (0:00:00.754) 0:00:11.464 ********
2026-03-05 00:36:41.879666 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:36:41.879676 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:36:41.879687 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:36:41.879698 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:36:41.879709 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:36:41.879720 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:36:41.879730 | orchestrator | changed: [testbed-manager]
2026-03-05 00:36:41.879741 | orchestrator |
2026-03-05 00:36:41.879752 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2026-03-05 00:36:41.879763 | orchestrator | Thursday 05 March 2026 00:36:33 +0000 (0:00:01.636) 0:00:13.101 ********
2026-03-05 00:36:41.879774 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:36:41.879785 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:36:41.879796 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:36:41.879806 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:36:41.879817 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:36:41.879828 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:36:41.879856 | orchestrator | changed: [testbed-manager]
2026-03-05 00:36:41.879867 | orchestrator |
2026-03-05 00:36:41.879878 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2026-03-05 00:36:41.879889 | orchestrator | Thursday 05 March 2026 00:36:34 +0000 (0:00:01.644) 0:00:14.745 ********
2026-03-05 00:36:41.879900 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:36:41.879911 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:36:41.879922 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:36:41.879933 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:36:41.879943 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:36:41.879954 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:36:41.879965 | orchestrator | ok: [testbed-manager]
2026-03-05 00:36:41.879975 | orchestrator |
2026-03-05 00:36:41.879986 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2026-03-05 00:36:41.879997 | orchestrator | Thursday 05 March 2026 00:36:36 +0000 (0:00:01.725) 0:00:16.471 ********
2026-03-05 00:36:41.880008 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:36:41.880019 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:36:41.880030 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:36:41.880040 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:36:41.880051 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:36:41.880062 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:36:41.880073 | orchestrator | changed: [testbed-manager]
2026-03-05 00:36:41.880115 | orchestrator |
2026-03-05 00:36:41.880133 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2026-03-05 00:36:41.880152 | orchestrator | Thursday 05 March 2026 00:36:38 +0000 (0:00:01.862) 0:00:18.334 ********
2026-03-05 00:36:41.880164 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:36:41.880174 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:36:41.880186 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:36:41.880204 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:36:41.880222 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:36:41.880239 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:36:41.880266 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:36:41.880286 | orchestrator |
2026-03-05 00:36:41.880304 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2026-03-05 00:36:41.880321 | orchestrator |
2026-03-05 00:36:41.880338 | orchestrator | TASK [Install python3-docker] **************************************************
2026-03-05 00:36:41.880355 | orchestrator | Thursday 05 March 2026 00:36:38 +0000 (0:00:00.652) 0:00:18.986 ********
2026-03-05 00:36:41.880373 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:36:41.880390 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:36:41.880409 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:36:41.880427 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:36:41.880459 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:36:41.880474 | orchestrator | ok: [testbed-manager]
2026-03-05 00:36:41.880485 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:36:41.880496 | orchestrator |
2026-03-05 00:36:41.880507 | orchestrator | PLAY RECAP *********************************************************************
2026-03-05 00:36:41.880519 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-05 00:36:41.880532 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-05 00:36:41.880543 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-05 00:36:41.880554 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-05 00:36:41.880565 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-05 00:36:41.880576 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-05 00:36:41.880594 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-05 00:36:41.880605 | orchestrator |
2026-03-05 00:36:41.880617 | orchestrator |
2026-03-05 00:36:41.880628 | orchestrator | TASKS RECAP ********************************************************************
2026-03-05 00:36:41.880639 | orchestrator | Thursday 05 March 2026 00:36:41 +0000 (0:00:02.869) 0:00:21.855 ********
2026-03-05 00:36:41.880649 | orchestrator | ===============================================================================
2026-03-05 00:36:41.880660 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.85s
2026-03-05 00:36:41.880671 | orchestrator | Install python3-docker -------------------------------------------------- 2.87s
2026-03-05 00:36:41.880682 | orchestrator | Apply netplan configuration --------------------------------------------- 2.64s
2026-03-05 00:36:41.880693 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.86s
2026-03-05 00:36:41.880704 | orchestrator | Apply netplan configuration --------------------------------------------- 1.74s
2026-03-05 00:36:41.880714 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.73s
2026-03-05 00:36:41.880725 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.64s
2026-03-05 00:36:41.880736 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.64s
2026-03-05 00:36:41.880746 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.47s
2026-03-05 00:36:41.880757 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.86s
2026-03-05 00:36:41.880768 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.75s
2026-03-05 00:36:41.880789 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.65s
2026-03-05 00:36:42.532883 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2026-03-05 00:36:54.660290 | orchestrator | 2026-03-05 00:36:54 | INFO  | Prepare task for execution of reboot.
2026-03-05 00:36:54.733963 | orchestrator | 2026-03-05 00:36:54 | INFO  | Task 1d8a2621-d548-4b5e-bea2-2fe5dd3983ca (reboot) was prepared for execution.
2026-03-05 00:36:54.734217 | orchestrator | 2026-03-05 00:36:54 | INFO  | It takes a moment until task 1d8a2621-d548-4b5e-bea2-2fe5dd3983ca (reboot) was started and output is visible here.
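Editor's note: the "PLAY RECAP" blocks in this log are the per-host success summary; a run is healthy when every host shows `failed=0` and `unreachable=0`. A minimal sketch of checking that from the recap text, assuming only the line format visible above (the helper name is illustrative):

```python
import re

def parse_play_recap(line):
    """Split one PLAY RECAP line, e.g.
    'testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0'
    into (hostname, {counter_name: value}).
    """
    host, _, counters = line.partition(" : ")
    counts = {name: int(value) for name, value in re.findall(r"(\w+)=(\d+)", counters)}
    return host.strip(), counts

host, counts = parse_play_recap(
    "testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0"
)
healthy = counts["failed"] == 0 and counts["unreachable"] == 0
```

Splitting on the first " : " keeps the parser robust against the variable-width padding Ansible inserts between the hostname and the counters.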
2026-03-05 00:37:05.236979 | orchestrator | 2026-03-05 00:37:05.237104 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-05 00:37:05.237129 | orchestrator | 2026-03-05 00:37:05.237134 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-05 00:37:05.237139 | orchestrator | Thursday 05 March 2026 00:36:59 +0000 (0:00:00.224) 0:00:00.224 ******** 2026-03-05 00:37:05.237143 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:37:05.237149 | orchestrator | 2026-03-05 00:37:05.237153 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-05 00:37:05.237157 | orchestrator | Thursday 05 March 2026 00:36:59 +0000 (0:00:00.109) 0:00:00.334 ******** 2026-03-05 00:37:05.237161 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:37:05.237165 | orchestrator | 2026-03-05 00:37:05.237169 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-05 00:37:05.237173 | orchestrator | Thursday 05 March 2026 00:37:00 +0000 (0:00:00.981) 0:00:01.315 ******** 2026-03-05 00:37:05.237177 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:37:05.237181 | orchestrator | 2026-03-05 00:37:05.237185 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-05 00:37:05.237189 | orchestrator | 2026-03-05 00:37:05.237193 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-05 00:37:05.237196 | orchestrator | Thursday 05 March 2026 00:37:00 +0000 (0:00:00.126) 0:00:01.442 ******** 2026-03-05 00:37:05.237200 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:37:05.237204 | orchestrator | 2026-03-05 00:37:05.237208 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-05 00:37:05.237212 | orchestrator | Thursday 05 March 
2026 00:37:00 +0000 (0:00:00.114) 0:00:01.556 ******** 2026-03-05 00:37:05.237216 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:37:05.237220 | orchestrator | 2026-03-05 00:37:05.237223 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-05 00:37:05.237228 | orchestrator | Thursday 05 March 2026 00:37:01 +0000 (0:00:00.682) 0:00:02.239 ******** 2026-03-05 00:37:05.237232 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:37:05.237236 | orchestrator | 2026-03-05 00:37:05.237239 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-05 00:37:05.237243 | orchestrator | 2026-03-05 00:37:05.237247 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-05 00:37:05.237251 | orchestrator | Thursday 05 March 2026 00:37:01 +0000 (0:00:00.123) 0:00:02.362 ******** 2026-03-05 00:37:05.237255 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:37:05.237259 | orchestrator | 2026-03-05 00:37:05.237262 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-05 00:37:05.237266 | orchestrator | Thursday 05 March 2026 00:37:01 +0000 (0:00:00.198) 0:00:02.561 ******** 2026-03-05 00:37:05.237271 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:37:05.237274 | orchestrator | 2026-03-05 00:37:05.237278 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-05 00:37:05.237282 | orchestrator | Thursday 05 March 2026 00:37:02 +0000 (0:00:00.694) 0:00:03.255 ******** 2026-03-05 00:37:05.237286 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:37:05.237290 | orchestrator | 2026-03-05 00:37:05.237294 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-05 00:37:05.237297 | orchestrator | 2026-03-05 00:37:05.237301 | orchestrator | TASK [Exit playbook, 
if user did not mean to reboot systems] ******************* 2026-03-05 00:37:05.237305 | orchestrator | Thursday 05 March 2026 00:37:02 +0000 (0:00:00.131) 0:00:03.387 ******** 2026-03-05 00:37:05.237309 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:37:05.237313 | orchestrator | 2026-03-05 00:37:05.237327 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-05 00:37:05.237331 | orchestrator | Thursday 05 March 2026 00:37:02 +0000 (0:00:00.101) 0:00:03.488 ******** 2026-03-05 00:37:05.237335 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:37:05.237338 | orchestrator | 2026-03-05 00:37:05.237342 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-05 00:37:05.237346 | orchestrator | Thursday 05 March 2026 00:37:03 +0000 (0:00:00.712) 0:00:04.201 ******** 2026-03-05 00:37:05.237354 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:37:05.237358 | orchestrator | 2026-03-05 00:37:05.237362 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-05 00:37:05.237366 | orchestrator | 2026-03-05 00:37:05.237370 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-05 00:37:05.237373 | orchestrator | Thursday 05 March 2026 00:37:03 +0000 (0:00:00.105) 0:00:04.307 ******** 2026-03-05 00:37:05.237377 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:37:05.237381 | orchestrator | 2026-03-05 00:37:05.237385 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-05 00:37:05.237389 | orchestrator | Thursday 05 March 2026 00:37:03 +0000 (0:00:00.106) 0:00:04.413 ******** 2026-03-05 00:37:05.237393 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:37:05.237397 | orchestrator | 2026-03-05 00:37:05.237400 | orchestrator | TASK [Reboot system - wait for the reboot to complete] 
************************* 2026-03-05 00:37:05.237404 | orchestrator | Thursday 05 March 2026 00:37:03 +0000 (0:00:00.672) 0:00:05.085 ******** 2026-03-05 00:37:05.237408 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:37:05.237412 | orchestrator | 2026-03-05 00:37:05.237416 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-05 00:37:05.237420 | orchestrator | 2026-03-05 00:37:05.237424 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-05 00:37:05.237427 | orchestrator | Thursday 05 March 2026 00:37:04 +0000 (0:00:00.104) 0:00:05.190 ******** 2026-03-05 00:37:05.237431 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:37:05.237435 | orchestrator | 2026-03-05 00:37:05.237439 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-05 00:37:05.237443 | orchestrator | Thursday 05 March 2026 00:37:04 +0000 (0:00:00.118) 0:00:05.309 ******** 2026-03-05 00:37:05.237447 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:37:05.237450 | orchestrator | 2026-03-05 00:37:05.237454 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-05 00:37:05.237458 | orchestrator | Thursday 05 March 2026 00:37:04 +0000 (0:00:00.670) 0:00:05.979 ******** 2026-03-05 00:37:05.237473 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:37:05.237477 | orchestrator | 2026-03-05 00:37:05.237481 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-05 00:37:05.237486 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-05 00:37:05.237491 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-05 00:37:05.237495 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  
rescued=0 ignored=0 2026-03-05 00:37:05.237498 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-05 00:37:05.237502 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-05 00:37:05.237506 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-05 00:37:05.237510 | orchestrator | 2026-03-05 00:37:05.237514 | orchestrator | 2026-03-05 00:37:05.237518 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-05 00:37:05.237522 | orchestrator | Thursday 05 March 2026 00:37:04 +0000 (0:00:00.041) 0:00:06.021 ******** 2026-03-05 00:37:05.237525 | orchestrator | =============================================================================== 2026-03-05 00:37:05.237529 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.41s 2026-03-05 00:37:05.237536 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.75s 2026-03-05 00:37:05.237540 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.63s 2026-03-05 00:37:05.621200 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-03-05 00:37:17.859734 | orchestrator | 2026-03-05 00:37:17 | INFO  | Prepare task for execution of wait-for-connection. 2026-03-05 00:37:17.935220 | orchestrator | 2026-03-05 00:37:17 | INFO  | Task 49008135-ab88-426e-ad4a-4c4908866645 (wait-for-connection) was prepared for execution. 2026-03-05 00:37:17.935336 | orchestrator | 2026-03-05 00:37:17 | INFO  | It takes a moment until task 49008135-ab88-426e-ad4a-4c4908866645 (wait-for-connection) has been started and output is visible here. 
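The two-step pattern above — reboot without waiting for completion, then `osism apply wait-for-connection` to block until every node answers again — boils down to a generic "poll until a probe succeeds" loop. The sketch below is an illustrative simplification, not the playbook's actual implementation; `retry_until` and the `nc` probe are hypothetical names chosen for this sketch.

```shell
#!/usr/bin/env bash
# Hypothetical helper mirroring the "reboot, then wait for reconnection"
# pattern from the log: run a probe command until it succeeds, giving up
# after max_attempts tries with a one-second pause between tries.
retry_until() {
    local max_attempts=$1; shift
    local attempt=1
    until "$@"; do
        if (( attempt++ == max_attempts )); then
            return 1    # probe never succeeded within the budget
        fi
        sleep 1
    done
}

# Example probe (hostname is illustrative): wait for SSH to answer again.
# retry_until 60 nc -z testbed-node-0 22
```

The fire-and-forget reboot plus a separate reconnection wait keeps the controller from holding an SSH session open across the reboot, which is why the log shows `skipping` for the in-play "wait for the reboot to complete" task.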
2026-03-05 00:37:34.608154 | orchestrator | 2026-03-05 00:37:34.608281 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2026-03-05 00:37:34.608299 | orchestrator | 2026-03-05 00:37:34.608311 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2026-03-05 00:37:34.608322 | orchestrator | Thursday 05 March 2026 00:37:22 +0000 (0:00:00.255) 0:00:00.255 ******** 2026-03-05 00:37:34.608333 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:37:34.608346 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:37:34.608376 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:37:34.608388 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:37:34.608399 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:37:34.608410 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:37:34.608421 | orchestrator | 2026-03-05 00:37:34.608432 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-05 00:37:34.608444 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 00:37:34.608456 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 00:37:34.608467 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 00:37:34.608478 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 00:37:34.608489 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 00:37:34.608500 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 00:37:34.608511 | orchestrator | 2026-03-05 00:37:34.608522 | orchestrator | 2026-03-05 00:37:34.608533 | orchestrator | TASKS RECAP 
******************************************************************** 2026-03-05 00:37:34.608545 | orchestrator | Thursday 05 March 2026 00:37:34 +0000 (0:00:11.549) 0:00:11.804 ******** 2026-03-05 00:37:34.608556 | orchestrator | =============================================================================== 2026-03-05 00:37:34.608567 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.55s 2026-03-05 00:37:34.921774 | orchestrator | + osism apply hddtemp 2026-03-05 00:37:47.046129 | orchestrator | 2026-03-05 00:37:47 | INFO  | Prepare task for execution of hddtemp. 2026-03-05 00:37:47.113517 | orchestrator | 2026-03-05 00:37:47 | INFO  | Task 56378f37-609d-42cf-a895-c7244cd4b939 (hddtemp) was prepared for execution. 2026-03-05 00:37:47.113616 | orchestrator | 2026-03-05 00:37:47 | INFO  | It takes a moment until task 56378f37-609d-42cf-a895-c7244cd4b939 (hddtemp) has been started and output is visible here. 2026-03-05 00:38:14.636710 | orchestrator | 2026-03-05 00:38:14.636813 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-03-05 00:38:14.636826 | orchestrator | 2026-03-05 00:38:14.636835 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-03-05 00:38:14.636868 | orchestrator | Thursday 05 March 2026 00:37:51 +0000 (0:00:00.333) 0:00:00.333 ******** 2026-03-05 00:38:14.636877 | orchestrator | ok: [testbed-manager] 2026-03-05 00:38:14.636887 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:38:14.636895 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:38:14.636902 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:38:14.636911 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:38:14.636919 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:38:14.636927 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:38:14.636935 | orchestrator | 2026-03-05 00:38:14.636943 | orchestrator | TASK [osism.services.hddtemp : Include 
distribution specific install tasks] **** 2026-03-05 00:38:14.636951 | orchestrator | Thursday 05 March 2026 00:37:52 +0000 (0:00:00.763) 0:00:01.097 ******** 2026-03-05 00:38:14.636961 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-05 00:38:14.636971 | orchestrator | 2026-03-05 00:38:14.636979 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-03-05 00:38:14.636987 | orchestrator | Thursday 05 March 2026 00:37:53 +0000 (0:00:01.211) 0:00:02.308 ******** 2026-03-05 00:38:14.636995 | orchestrator | ok: [testbed-manager] 2026-03-05 00:38:14.637003 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:38:14.637070 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:38:14.637079 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:38:14.637087 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:38:14.637095 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:38:14.637102 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:38:14.637110 | orchestrator | 2026-03-05 00:38:14.637118 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-03-05 00:38:14.637126 | orchestrator | Thursday 05 March 2026 00:37:55 +0000 (0:00:02.169) 0:00:04.477 ******** 2026-03-05 00:38:14.637134 | orchestrator | changed: [testbed-manager] 2026-03-05 00:38:14.637144 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:38:14.637151 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:38:14.637159 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:38:14.637167 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:38:14.637175 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:38:14.637182 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:38:14.637190 | 
orchestrator | 2026-03-05 00:38:14.637198 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2026-03-05 00:38:14.637206 | orchestrator | Thursday 05 March 2026 00:37:57 +0000 (0:00:01.223) 0:00:05.701 ******** 2026-03-05 00:38:14.637214 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:38:14.637221 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:38:14.637229 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:38:14.637239 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:38:14.637247 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:38:14.637256 | orchestrator | ok: [testbed-manager] 2026-03-05 00:38:14.637265 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:38:14.637275 | orchestrator | 2026-03-05 00:38:14.637284 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-03-05 00:38:14.637293 | orchestrator | Thursday 05 March 2026 00:37:58 +0000 (0:00:01.211) 0:00:06.913 ******** 2026-03-05 00:38:14.637303 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:38:14.637324 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:38:14.637334 | orchestrator | changed: [testbed-manager] 2026-03-05 00:38:14.637342 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:38:14.637351 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:38:14.637361 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:38:14.637369 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:38:14.637378 | orchestrator | 2026-03-05 00:38:14.637388 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-03-05 00:38:14.637397 | orchestrator | Thursday 05 March 2026 00:37:59 +0000 (0:00:00.821) 0:00:07.734 ******** 2026-03-05 00:38:14.637413 | orchestrator | changed: [testbed-manager] 2026-03-05 00:38:14.637422 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:38:14.637431 | orchestrator | changed: [testbed-node-3] 
2026-03-05 00:38:14.637440 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:38:14.637449 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:38:14.637458 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:38:14.637467 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:38:14.637477 | orchestrator | 2026-03-05 00:38:14.637486 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-03-05 00:38:14.637495 | orchestrator | Thursday 05 March 2026 00:38:11 +0000 (0:00:11.932) 0:00:19.667 ******** 2026-03-05 00:38:14.637506 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-05 00:38:14.637521 | orchestrator | 2026-03-05 00:38:14.637534 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-03-05 00:38:14.637548 | orchestrator | Thursday 05 March 2026 00:38:12 +0000 (0:00:01.345) 0:00:21.012 ******** 2026-03-05 00:38:14.637561 | orchestrator | changed: [testbed-manager] 2026-03-05 00:38:14.637574 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:38:14.637587 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:38:14.637601 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:38:14.637614 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:38:14.637626 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:38:14.637637 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:38:14.637650 | orchestrator | 2026-03-05 00:38:14.637663 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-05 00:38:14.637676 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 00:38:14.637712 | orchestrator | testbed-node-0 : ok=8  
changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-05 00:38:14.637727 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-05 00:38:14.637740 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-05 00:38:14.637753 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-05 00:38:14.637766 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-05 00:38:14.637780 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-05 00:38:14.637795 | orchestrator | 2026-03-05 00:38:14.637809 | orchestrator | 2026-03-05 00:38:14.637823 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-05 00:38:14.637837 | orchestrator | Thursday 05 March 2026 00:38:14 +0000 (0:00:01.898) 0:00:22.911 ******** 2026-03-05 00:38:14.637850 | orchestrator | =============================================================================== 2026-03-05 00:38:14.637864 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 11.93s 2026-03-05 00:38:14.637877 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.17s 2026-03-05 00:38:14.637890 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.90s 2026-03-05 00:38:14.637903 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.35s 2026-03-05 00:38:14.637914 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.22s 2026-03-05 00:38:14.637938 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.21s 2026-03-05 00:38:14.637951 | orchestrator | osism.services.hddtemp : Include 
distribution specific install tasks ---- 1.21s 2026-03-05 00:38:14.637965 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.82s 2026-03-05 00:38:14.637977 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.76s 2026-03-05 00:38:14.942353 | orchestrator | ++ semver latest 7.1.1 2026-03-05 00:38:15.000443 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-05 00:38:15.000540 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-05 00:38:15.000555 | orchestrator | + sudo systemctl restart manager.service 2026-03-05 00:38:28.465080 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-05 00:38:28.465365 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-03-05 00:38:28.465397 | orchestrator | + local max_attempts=60 2026-03-05 00:38:28.465416 | orchestrator | + local name=ceph-ansible 2026-03-05 00:38:28.465433 | orchestrator | + local attempt_num=1 2026-03-05 00:38:28.465470 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-05 00:38:28.500182 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-05 00:38:28.500292 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-05 00:38:28.500309 | orchestrator | + sleep 5 2026-03-05 00:38:33.508565 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-05 00:38:33.535740 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-05 00:38:33.535834 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-05 00:38:33.535850 | orchestrator | + sleep 5 2026-03-05 00:38:38.539266 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-05 00:38:38.574466 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-05 00:38:38.574557 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-05 00:38:38.574599 | orchestrator | + sleep 5 2026-03-05 00:38:43.578585 | 
orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-05 00:38:43.615445 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-05 00:38:43.615557 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-05 00:38:43.615628 | orchestrator | + sleep 5 2026-03-05 00:38:48.619469 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-05 00:38:48.659419 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-05 00:38:48.659543 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-05 00:38:48.659563 | orchestrator | + sleep 5 2026-03-05 00:38:53.664678 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-05 00:38:53.705974 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-05 00:38:53.706282 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-05 00:38:53.706312 | orchestrator | + sleep 5 2026-03-05 00:38:58.711041 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-05 00:38:58.748805 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-05 00:38:58.748918 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-05 00:38:58.749606 | orchestrator | + sleep 5 2026-03-05 00:39:03.754247 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-05 00:39:03.801574 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-05 00:39:03.801680 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-05 00:39:03.801696 | orchestrator | + sleep 5 2026-03-05 00:39:08.806322 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-05 00:39:08.845368 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-05 00:39:08.845471 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-05 00:39:08.845485 | orchestrator | + sleep 5 2026-03-05 00:39:13.848947 | orchestrator | ++ 
/usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-05 00:39:13.891241 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-05 00:39:13.891324 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-05 00:39:13.891339 | orchestrator | + sleep 5 2026-03-05 00:39:18.896118 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-05 00:39:18.932673 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-05 00:39:18.932775 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-05 00:39:18.932790 | orchestrator | + sleep 5 2026-03-05 00:39:23.937262 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-05 00:39:23.975781 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-05 00:39:23.975915 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-05 00:39:23.975931 | orchestrator | + sleep 5 2026-03-05 00:39:28.980608 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-05 00:39:29.022947 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-05 00:39:29.023069 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-05 00:39:29.023085 | orchestrator | + sleep 5 2026-03-05 00:39:34.027917 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-05 00:39:34.062493 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-05 00:39:34.062583 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-03-05 00:39:34.062608 | orchestrator | + local max_attempts=60 2026-03-05 00:39:34.062633 | orchestrator | + local name=kolla-ansible 2026-03-05 00:39:34.062663 | orchestrator | + local attempt_num=1 2026-03-05 00:39:34.063447 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-03-05 00:39:34.103171 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-05 00:39:34.103265 | orchestrator | + 
wait_for_container_healthy 60 osism-ansible 2026-03-05 00:39:34.103280 | orchestrator | + local max_attempts=60 2026-03-05 00:39:34.103293 | orchestrator | + local name=osism-ansible 2026-03-05 00:39:34.103304 | orchestrator | + local attempt_num=1 2026-03-05 00:39:34.104519 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-03-05 00:39:34.139746 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-05 00:39:34.139868 | orchestrator | + [[ true == \t\r\u\e ]] 2026-03-05 00:39:34.139886 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-03-05 00:39:34.302569 | orchestrator | ARA in ceph-ansible already disabled. 2026-03-05 00:39:34.463918 | orchestrator | ARA in kolla-ansible already disabled. 2026-03-05 00:39:34.635328 | orchestrator | ARA in osism-ansible already disabled. 2026-03-05 00:39:34.792571 | orchestrator | ARA in osism-kubernetes already disabled. 2026-03-05 00:39:34.793494 | orchestrator | + osism apply gather-facts 2026-03-05 00:39:46.883842 | orchestrator | 2026-03-05 00:39:46 | INFO  | Prepare task for execution of gather-facts. 2026-03-05 00:39:46.957851 | orchestrator | 2026-03-05 00:39:46 | INFO  | Task eede3d65-60b8-471b-ba0c-c4e44d6edc3c (gather-facts) was prepared for execution. 2026-03-05 00:39:46.957959 | orchestrator | 2026-03-05 00:39:46 | INFO  | It takes a moment until task eede3d65-60b8-471b-ba0c-c4e44d6edc3c (gather-facts) has been started and output is visible here. 
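The xtrace output above spells out `wait_for_container_healthy` line by line; reassembled, the function looks roughly like this. This is a reconstruction from the trace, not the script's verbatim source, and the `docker` call is left unqualified here (the trace uses `/usr/bin/docker`) so it can be stubbed for testing.

```shell
# Reconstructed from the `+`/`++` xtrace lines above: poll a container's
# health status every 5 seconds until Docker reports "healthy", giving up
# after max_attempts checks. The trace shows the progression
# unhealthy -> starting -> healthy for ceph-ansible.
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        if (( attempt_num++ == max_attempts )); then
            return 1    # container never became healthy
        fi
        sleep 5
    done
}

# Usage, as in the log:
# wait_for_container_healthy 60 ceph-ansible
```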
2026-03-05 00:39:59.636927 | orchestrator | 2026-03-05 00:39:59.637088 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-05 00:39:59.637120 | orchestrator | 2026-03-05 00:39:59.637175 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-03-05 00:39:59.637189 | orchestrator | Thursday 05 March 2026 00:39:51 +0000 (0:00:00.219) 0:00:00.219 ******** 2026-03-05 00:39:59.637201 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:39:59.637214 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:39:59.637225 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:39:59.637236 | orchestrator | ok: [testbed-manager] 2026-03-05 00:39:59.637247 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:39:59.637258 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:39:59.637269 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:39:59.637280 | orchestrator | 2026-03-05 00:39:59.637291 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-05 00:39:59.637302 | orchestrator | 2026-03-05 00:39:59.637313 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-05 00:39:59.637324 | orchestrator | Thursday 05 March 2026 00:39:58 +0000 (0:00:07.425) 0:00:07.645 ******** 2026-03-05 00:39:59.637336 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:39:59.637348 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:39:59.637359 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:39:59.637370 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:39:59.637380 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:39:59.637392 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:39:59.637402 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:39:59.637413 | orchestrator | 2026-03-05 00:39:59.637425 | orchestrator | PLAY RECAP 
********************************************************************* 2026-03-05 00:39:59.637461 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-05 00:39:59.637476 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-05 00:39:59.637507 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-05 00:39:59.637521 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-05 00:39:59.637533 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-05 00:39:59.637546 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-05 00:39:59.637559 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-05 00:39:59.637572 | orchestrator | 2026-03-05 00:39:59.637584 | orchestrator | 2026-03-05 00:39:59.637597 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-05 00:39:59.637610 | orchestrator | Thursday 05 March 2026 00:39:59 +0000 (0:00:00.554) 0:00:08.200 ******** 2026-03-05 00:39:59.637624 | orchestrator | =============================================================================== 2026-03-05 00:39:59.637636 | orchestrator | Gathers facts about hosts ----------------------------------------------- 7.43s 2026-03-05 00:39:59.637649 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.55s 2026-03-05 00:40:00.042957 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-03-05 00:40:00.058809 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-03-05 
00:40:00.072228 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-03-05 00:40:00.090808 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-03-05 00:40:00.108375 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-03-05 00:40:00.125423 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal 2026-03-05 00:40:00.146362 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-03-05 00:40:00.165580 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-03-05 00:40:00.186864 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-03-05 00:40:00.204429 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager 2026-03-05 00:40:00.220985 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-03-05 00:40:00.240296 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-03-05 00:40:00.259468 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-03-05 00:40:00.273348 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-03-05 00:40:00.285603 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal 2026-03-05 00:40:00.297783 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-03-05 00:40:00.310169 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-03-05 00:40:00.321977 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-03-05 00:40:00.335253 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-03-05 00:40:00.347395 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2026-03-05 00:40:00.360211 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-03-05 00:40:00.379285 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-03-05 00:40:00.402723 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-03-05 00:40:00.425134 | orchestrator | + [[ false == \t\r\u\e ]] 2026-03-05 00:40:00.593903 | orchestrator | ok: Runtime: 0:24:42.250185 2026-03-05 00:40:00.705502 | 2026-03-05 00:40:00.705648 | TASK [Deploy services] 2026-03-05 00:40:01.239805 | orchestrator | skipping: Conditional result was False 2026-03-05 00:40:01.258585 | 2026-03-05 00:40:01.258758 | TASK [Deploy in a nutshell] 2026-03-05 00:40:01.976612 | orchestrator | + set -e 2026-03-05 00:40:01.978321 | orchestrator | 2026-03-05 00:40:01.978391 | orchestrator | # PULL IMAGES 2026-03-05 00:40:01.978402 | orchestrator | 2026-03-05 00:40:01.978421 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-05 00:40:01.978435 | orchestrator | ++ export INTERACTIVE=false 2026-03-05 00:40:01.978445 | orchestrator | ++ INTERACTIVE=false 2026-03-05 00:40:01.978475 | 
orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-05 00:40:01.978489 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-05 00:40:01.978497 | orchestrator | + source /opt/manager-vars.sh 2026-03-05 00:40:01.978505 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-05 00:40:01.978517 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-05 00:40:01.978524 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-05 00:40:01.978535 | orchestrator | ++ CEPH_VERSION=reef 2026-03-05 00:40:01.978542 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-05 00:40:01.978553 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-05 00:40:01.978559 | orchestrator | ++ export MANAGER_VERSION=latest 2026-03-05 00:40:01.978568 | orchestrator | ++ MANAGER_VERSION=latest 2026-03-05 00:40:01.978575 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-05 00:40:01.978583 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-05 00:40:01.978589 | orchestrator | ++ export ARA=false 2026-03-05 00:40:01.978595 | orchestrator | ++ ARA=false 2026-03-05 00:40:01.978601 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-05 00:40:01.978608 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-05 00:40:01.978614 | orchestrator | ++ export TEMPEST=true 2026-03-05 00:40:01.978621 | orchestrator | ++ TEMPEST=true 2026-03-05 00:40:01.978627 | orchestrator | ++ export IS_ZUUL=true 2026-03-05 00:40:01.978633 | orchestrator | ++ IS_ZUUL=true 2026-03-05 00:40:01.978639 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.12 2026-03-05 00:40:01.978646 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.12 2026-03-05 00:40:01.978653 | orchestrator | ++ export EXTERNAL_API=false 2026-03-05 00:40:01.978659 | orchestrator | ++ EXTERNAL_API=false 2026-03-05 00:40:01.978665 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-05 00:40:01.978671 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-05 00:40:01.978677 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-05 00:40:01.978683 | 
orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-05 00:40:01.978690 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-05 00:40:01.978696 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-05 00:40:01.978702 | orchestrator | + echo 2026-03-05 00:40:01.978707 | orchestrator | + echo '# PULL IMAGES' 2026-03-05 00:40:01.978713 | orchestrator | + echo 2026-03-05 00:40:01.978729 | orchestrator | ++ semver latest 7.0.0 2026-03-05 00:40:02.034419 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-05 00:40:02.034530 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-05 00:40:02.034539 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-03-05 00:40:04.094305 | orchestrator | 2026-03-05 00:40:04 | INFO  | Trying to run play pull-images in environment custom 2026-03-05 00:40:14.134611 | orchestrator | 2026-03-05 00:40:14 | INFO  | Prepare task for execution of pull-images. 2026-03-05 00:40:14.219409 | orchestrator | 2026-03-05 00:40:14 | INFO  | Task f31c26d0-bdda-4a7c-9c87-6659bf3cffc5 (pull-images) was prepared for execution. 2026-03-05 00:40:14.219523 | orchestrator | 2026-03-05 00:40:14 | INFO  | Task f31c26d0-bdda-4a7c-9c87-6659bf3cffc5 is running in background. No more output. Check ARA for logs. 2026-03-05 00:40:16.721154 | orchestrator | 2026-03-05 00:40:16 | INFO  | Trying to run play wipe-partitions in environment custom 2026-03-05 00:40:26.868315 | orchestrator | 2026-03-05 00:40:26 | INFO  | Prepare task for execution of wipe-partitions. 2026-03-05 00:40:26.939905 | orchestrator | 2026-03-05 00:40:26 | INFO  | Task b725bb82-388c-400c-ae86-77602e0e6333 (wipe-partitions) was prepared for execution. 2026-03-05 00:40:26.939977 | orchestrator | 2026-03-05 00:40:26 | INFO  | It takes a moment until task b725bb82-388c-400c-ae86-77602e0e6333 (wipe-partitions) has been started and output is visible here. 
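The wipe-partitions play that follows checks each candidate device for availability and then clears stale signatures with `wipefs` so Ceph can reuse the disks. A minimal shell equivalent of those two steps, with a hypothetical `wipe_devices` helper (the device names match the log; this is not the play's actual code):

```shell
# Hypothetical sketch of the wipe-partitions steps from the log: verify each
# device exists as a block device, then remove every filesystem/LVM/RAID
# signature wipefs can find on it.
wipe_devices() {
    local dev
    for dev in "$@"; do
        if [ -b "$dev" ]; then
            wipefs --all "$dev"   # drop all detected signatures
        fi
    done
}

# As in the log, the OSD candidate devices on each storage node:
# wipe_devices /dev/sdb /dev/sdc /dev/sdd
```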
2026-03-05 00:40:39.777973 | orchestrator |
2026-03-05 00:40:39.778206 | orchestrator | PLAY [Wipe partitions] *********************************************************
2026-03-05 00:40:39.778268 | orchestrator |
2026-03-05 00:40:39.778278 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2026-03-05 00:40:39.778386 | orchestrator | Thursday 05 March 2026 00:40:31 +0000 (0:00:00.135) 0:00:00.135 ********
2026-03-05 00:40:39.778408 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:40:39.778414 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:40:39.778418 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:40:39.778421 | orchestrator |
2026-03-05 00:40:39.778426 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2026-03-05 00:40:39.778430 | orchestrator | Thursday 05 March 2026 00:40:32 +0000 (0:00:00.584) 0:00:00.719 ********
2026-03-05 00:40:39.778438 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:40:39.778442 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:40:39.778446 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:40:39.778449 | orchestrator |
2026-03-05 00:40:39.778453 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2026-03-05 00:40:39.778457 | orchestrator | Thursday 05 March 2026 00:40:32 +0000 (0:00:00.396) 0:00:01.116 ********
2026-03-05 00:40:39.778461 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:40:39.778466 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:40:39.778470 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:40:39.778473 | orchestrator |
2026-03-05 00:40:39.778477 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2026-03-05 00:40:39.778482 | orchestrator | Thursday 05 March 2026 00:40:33 +0000 (0:00:00.666) 0:00:01.783 ********
2026-03-05 00:40:39.778486 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:40:39.778489 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:40:39.778493 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:40:39.778510 | orchestrator |
2026-03-05 00:40:39.778530 | orchestrator | TASK [Check device availability] ***********************************************
2026-03-05 00:40:39.778534 | orchestrator | Thursday 05 March 2026 00:40:33 +0000 (0:00:00.300) 0:00:02.084 ********
2026-03-05 00:40:39.778538 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-03-05 00:40:39.778582 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-03-05 00:40:39.778587 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-03-05 00:40:39.778591 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-03-05 00:40:39.778594 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-03-05 00:40:39.778624 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-03-05 00:40:39.778628 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-03-05 00:40:39.778631 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-03-05 00:40:39.778635 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-03-05 00:40:39.778639 | orchestrator |
2026-03-05 00:40:39.778643 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2026-03-05 00:40:39.778647 | orchestrator | Thursday 05 March 2026 00:40:34 +0000 (0:00:01.188) 0:00:03.272 ********
2026-03-05 00:40:39.778651 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2026-03-05 00:40:39.778655 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2026-03-05 00:40:39.778658 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2026-03-05 00:40:39.778662 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2026-03-05 00:40:39.778666 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2026-03-05 00:40:39.778670 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2026-03-05 00:40:39.778673 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2026-03-05 00:40:39.778677 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2026-03-05 00:40:39.778681 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2026-03-05 00:40:39.778684 | orchestrator |
2026-03-05 00:40:39.778688 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2026-03-05 00:40:39.778692 | orchestrator | Thursday 05 March 2026 00:40:36 +0000 (0:00:01.510) 0:00:04.782 ********
2026-03-05 00:40:39.778696 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-03-05 00:40:39.778699 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-03-05 00:40:39.778703 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-03-05 00:40:39.778711 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-03-05 00:40:39.778719 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-03-05 00:40:39.778723 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-03-05 00:40:39.778727 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-03-05 00:40:39.778731 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-03-05 00:40:39.778734 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-03-05 00:40:39.778738 | orchestrator |
2026-03-05 00:40:39.778742 | orchestrator | TASK [Reload udev rules] *******************************************************
2026-03-05 00:40:39.778746 | orchestrator | Thursday 05 March 2026 00:40:38 +0000 (0:00:02.011) 0:00:06.794 ********
2026-03-05 00:40:39.778750 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:40:39.778765 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:40:39.778784 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:40:39.778812 | orchestrator |
2026-03-05 00:40:39.778816 | orchestrator | TASK [Request device events from the kernel] ***********************************
2026-03-05 00:40:39.778820 | orchestrator | Thursday 05 March 2026 00:40:38 +0000 (0:00:00.603) 0:00:07.397 ********
2026-03-05 00:40:39.778824 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:40:39.778828 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:40:39.778843 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:40:39.778847 | orchestrator |
2026-03-05 00:40:39.778851 | orchestrator | PLAY RECAP *********************************************************************
2026-03-05 00:40:39.778857 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-05 00:40:39.778862 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-05 00:40:39.778879 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-05 00:40:39.778883 | orchestrator |
2026-03-05 00:40:39.778887 | orchestrator |
2026-03-05 00:40:39.778891 | orchestrator | TASKS RECAP ********************************************************************
2026-03-05 00:40:39.778894 | orchestrator | Thursday 05 March 2026 00:40:39 +0000 (0:00:00.623) 0:00:08.021 ********
2026-03-05 00:40:39.778898 | orchestrator | ===============================================================================
2026-03-05 00:40:39.778902 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.01s
2026-03-05 00:40:39.778906 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.51s
2026-03-05 00:40:39.778909 | orchestrator | Check device availability ----------------------------------------------- 1.19s
2026-03-05 00:40:39.778913 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.67s
2026-03-05 00:40:39.778917 | orchestrator | Request device events from the kernel ----------------------------------- 0.62s
2026-03-05 00:40:39.778921 | orchestrator | Reload udev rules ------------------------------------------------------- 0.60s
2026-03-05 00:40:39.778924 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.58s
2026-03-05 00:40:39.778928 | orchestrator | Remove all rook related logical devices --------------------------------- 0.40s
2026-03-05 00:40:39.778932 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.30s
2026-03-05 00:40:52.329526 | orchestrator | 2026-03-05 00:40:52 | INFO  | Prepare task for execution of facts.
2026-03-05 00:40:52.432423 | orchestrator | 2026-03-05 00:40:52 | INFO  | Task 4554e96d-c286-4571-a4b5-2ec9a3b89847 (facts) was prepared for execution.
2026-03-05 00:40:52.432510 | orchestrator | 2026-03-05 00:40:52 | INFO  | It takes a moment until task 4554e96d-c286-4571-a4b5-2ec9a3b89847 (facts) has been started and output is visible here.
2026-03-05 00:41:04.696335 | orchestrator |
2026-03-05 00:41:04.696453 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-03-05 00:41:04.696471 | orchestrator |
2026-03-05 00:41:04.696525 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-03-05 00:41:04.696545 | orchestrator | Thursday 05 March 2026 00:40:56 +0000 (0:00:00.285) 0:00:00.285 ********
2026-03-05 00:41:04.696563 | orchestrator | ok: [testbed-manager]
2026-03-05 00:41:04.696583 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:41:04.696601 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:41:04.696619 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:41:04.696636 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:41:04.696654 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:41:04.696673 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:41:04.696692 | orchestrator |
2026-03-05 00:41:04.696733 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-03-05 00:41:04.696752 | orchestrator | Thursday 05 March 2026 00:40:57 +0000 (0:00:01.108) 0:00:01.394 ********
2026-03-05 00:41:04.696770 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:41:04.696788 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:41:04.696806 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:41:04.696827 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:41:04.696846 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:41:04.696865 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:41:04.696883 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:41:04.696903 | orchestrator |
2026-03-05 00:41:04.696924 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-05 00:41:04.696943 | orchestrator |
2026-03-05 00:41:04.696991 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-05 00:41:04.697013 | orchestrator | Thursday 05 March 2026 00:40:59 +0000 (0:00:01.244) 0:00:02.638 ********
2026-03-05 00:41:04.697033 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:41:04.697124 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:41:04.697149 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:41:04.697180 | orchestrator | ok: [testbed-manager]
2026-03-05 00:41:04.697192 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:41:04.697203 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:41:04.697213 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:41:04.697224 | orchestrator |
2026-03-05 00:41:04.697236 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-03-05 00:41:04.697247 | orchestrator |
2026-03-05 00:41:04.697257 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-03-05 00:41:04.697272 | orchestrator | Thursday 05 March 2026 00:41:03 +0000 (0:00:04.552) 0:00:07.190 ********
2026-03-05 00:41:04.697291 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:41:04.697313 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:41:04.697334 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:41:04.697350 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:41:04.697361 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:41:04.697372 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:41:04.697382 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:41:04.697415 | orchestrator |
2026-03-05 00:41:04.697428 | orchestrator | PLAY RECAP *********************************************************************
2026-03-05 00:41:04.697439 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-05 00:41:04.697452 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-05 00:41:04.697463 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-05 00:41:04.697477 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-05 00:41:04.697497 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-05 00:41:04.697537 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-05 00:41:04.697560 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-05 00:41:04.697581 | orchestrator |
2026-03-05 00:41:04.697601 | orchestrator |
2026-03-05 00:41:04.697612 | orchestrator | TASKS RECAP ********************************************************************
2026-03-05 00:41:04.697623 | orchestrator | Thursday 05 March 2026 00:41:04 +0000 (0:00:00.544) 0:00:07.735 ********
2026-03-05 00:41:04.697634 | orchestrator | ===============================================================================
2026-03-05 00:41:04.697645 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.55s
2026-03-05 00:41:04.697660 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.24s
2026-03-05 00:41:04.697679 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.11s
2026-03-05 00:41:04.697699 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.55s
2026-03-05 00:41:07.148789 | orchestrator | 2026-03-05 00:41:07 | INFO  | Prepare task for execution of ceph-configure-lvm-volumes.
2026-03-05 00:41:07.212716 | orchestrator | 2026-03-05 00:41:07 | INFO  | Task 7fb0dcaa-a7a5-48c0-b16e-a1b744a6c68b (ceph-configure-lvm-volumes) was prepared for execution.
2026-03-05 00:41:07.212810 | orchestrator | 2026-03-05 00:41:07 | INFO  | It takes a moment until task 7fb0dcaa-a7a5-48c0-b16e-a1b744a6c68b (ceph-configure-lvm-volumes) has been started and output is visible here.
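The ceph-configure-lvm-volumes play that starts here assigns a UUID per OSD data disk (the "Set UUIDs for OSD VGs/LVs" task) and renders a block-only `lvm_volumes` entry from it, which is the structure ceph-ansible consumes. A rough sketch of that mapping, with placeholder VG/LV name prefixes (the real naming comes from the OSISM Ansible tasks, and the real UUIDs are derived deterministically rather than generated at random):

```shell
# Illustrative sketch, not the real play: emit one block-only lvm_volumes
# entry per OSD data disk, keyed by a per-disk UUID. A random kernel UUID
# stands in for the deterministic one the play computes.
build_lvm_volumes() {
  for dev in "$@"; do
    uuid=$(cat /proc/sys/kernel/random/uuid)
    printf -- '- data: osd-block-%s\n  data_vg: ceph-%s\n' "$uuid" "$uuid"
  done
}
```

For this node the play handles two data disks (sdb and sdc), so the compiled `lvm_volumes` list has two such entries, as the "Generate lvm_volumes structure (block only)" output below confirms.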
2026-03-05 00:41:19.308019 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-05 00:41:19.308176 | orchestrator | 2.16.14
2026-03-05 00:41:19.308190 | orchestrator |
2026-03-05 00:41:19.308205 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-03-05 00:41:19.308213 | orchestrator |
2026-03-05 00:41:19.308220 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-05 00:41:19.308227 | orchestrator | Thursday 05 March 2026 00:41:11 +0000 (0:00:00.321) 0:00:00.321 ********
2026-03-05 00:41:19.308234 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-05 00:41:19.308240 | orchestrator |
2026-03-05 00:41:19.308246 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-05 00:41:19.308253 | orchestrator | Thursday 05 March 2026 00:41:11 +0000 (0:00:00.250) 0:00:00.572 ********
2026-03-05 00:41:19.308260 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:41:19.308266 | orchestrator |
2026-03-05 00:41:19.308272 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:41:19.308278 | orchestrator | Thursday 05 March 2026 00:41:12 +0000 (0:00:00.225) 0:00:00.797 ********
2026-03-05 00:41:19.308285 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-03-05 00:41:19.308293 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-03-05 00:41:19.308303 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-03-05 00:41:19.308313 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-03-05 00:41:19.308323 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-03-05 00:41:19.308332 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-03-05 00:41:19.308343 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-03-05 00:41:19.308352 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-03-05 00:41:19.308362 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-03-05 00:41:19.308372 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-03-05 00:41:19.308406 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-03-05 00:41:19.308417 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-03-05 00:41:19.308427 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-03-05 00:41:19.308433 | orchestrator |
2026-03-05 00:41:19.308440 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:41:19.308446 | orchestrator | Thursday 05 March 2026 00:41:12 +0000 (0:00:00.509) 0:00:01.307 ********
2026-03-05 00:41:19.308452 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:41:19.308458 | orchestrator |
2026-03-05 00:41:19.308464 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:41:19.308471 | orchestrator | Thursday 05 March 2026 00:41:12 +0000 (0:00:00.194) 0:00:01.501 ********
2026-03-05 00:41:19.308477 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:41:19.308483 | orchestrator |
2026-03-05 00:41:19.308489 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:41:19.308499 | orchestrator | Thursday 05 March 2026 00:41:13 +0000 (0:00:00.193) 0:00:01.695 ********
2026-03-05 00:41:19.308505 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:41:19.308512 | orchestrator |
2026-03-05 00:41:19.308518 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:41:19.308524 | orchestrator | Thursday 05 March 2026 00:41:13 +0000 (0:00:00.203) 0:00:01.898 ********
2026-03-05 00:41:19.308530 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:41:19.308536 | orchestrator |
2026-03-05 00:41:19.308543 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:41:19.308549 | orchestrator | Thursday 05 March 2026 00:41:13 +0000 (0:00:00.236) 0:00:02.135 ********
2026-03-05 00:41:19.308555 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:41:19.308561 | orchestrator |
2026-03-05 00:41:19.308567 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:41:19.308574 | orchestrator | Thursday 05 March 2026 00:41:13 +0000 (0:00:00.217) 0:00:02.353 ********
2026-03-05 00:41:19.308580 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:41:19.308586 | orchestrator |
2026-03-05 00:41:19.308592 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:41:19.308598 | orchestrator | Thursday 05 March 2026 00:41:13 +0000 (0:00:00.208) 0:00:02.561 ********
2026-03-05 00:41:19.308604 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:41:19.308610 | orchestrator |
2026-03-05 00:41:19.308616 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:41:19.308623 | orchestrator | Thursday 05 March 2026 00:41:14 +0000 (0:00:00.207) 0:00:02.768 ********
2026-03-05 00:41:19.308629 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:41:19.308635 | orchestrator |
2026-03-05 00:41:19.308641 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:41:19.308647 | orchestrator | Thursday 05 March 2026 00:41:14 +0000 (0:00:00.233) 0:00:03.002 ********
2026-03-05 00:41:19.308653 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_834c07da-6670-4f26-8062-9b7380900cd1)
2026-03-05 00:41:19.308661 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_834c07da-6670-4f26-8062-9b7380900cd1)
2026-03-05 00:41:19.308667 | orchestrator |
2026-03-05 00:41:19.308673 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:41:19.308693 | orchestrator | Thursday 05 March 2026 00:41:14 +0000 (0:00:00.416) 0:00:03.418 ********
2026-03-05 00:41:19.308700 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_46af06ac-e806-45b3-baa6-786374d24d95)
2026-03-05 00:41:19.308706 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_46af06ac-e806-45b3-baa6-786374d24d95)
2026-03-05 00:41:19.308712 | orchestrator |
2026-03-05 00:41:19.308718 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:41:19.308729 | orchestrator | Thursday 05 March 2026 00:41:15 +0000 (0:00:00.636) 0:00:04.055 ********
2026-03-05 00:41:19.308735 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_94265553-26b7-47c9-a922-5463d2be5f34)
2026-03-05 00:41:19.308742 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_94265553-26b7-47c9-a922-5463d2be5f34)
2026-03-05 00:41:19.308748 | orchestrator |
2026-03-05 00:41:19.308759 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:41:19.308772 | orchestrator | Thursday 05 March 2026 00:41:16 +0000 (0:00:00.673) 0:00:04.729 ********
2026-03-05 00:41:19.308788 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_db99048b-c1ef-4f9e-82d3-cd84d3f63e80)
2026-03-05 00:41:19.308798 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_db99048b-c1ef-4f9e-82d3-cd84d3f63e80)
2026-03-05 00:41:19.308808 | orchestrator |
2026-03-05 00:41:19.308818 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:41:19.308828 | orchestrator | Thursday 05 March 2026 00:41:17 +0000 (0:00:00.910) 0:00:05.639 ********
2026-03-05 00:41:19.308837 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-05 00:41:19.308846 | orchestrator |
2026-03-05 00:41:19.308857 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:41:19.308866 | orchestrator | Thursday 05 March 2026 00:41:17 +0000 (0:00:00.378) 0:00:06.017 ********
2026-03-05 00:41:19.308881 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-03-05 00:41:19.308892 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-03-05 00:41:19.308901 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-03-05 00:41:19.308911 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-03-05 00:41:19.308922 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-03-05 00:41:19.308930 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-03-05 00:41:19.308939 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-03-05 00:41:19.308949 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-03-05 00:41:19.308960 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-03-05 00:41:19.308970 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-03-05 00:41:19.308979 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-03-05 00:41:19.308989 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-03-05 00:41:19.308998 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-03-05 00:41:19.309008 | orchestrator |
2026-03-05 00:41:19.309018 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:41:19.309028 | orchestrator | Thursday 05 March 2026 00:41:17 +0000 (0:00:00.374) 0:00:06.392 ********
2026-03-05 00:41:19.309038 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:41:19.309048 | orchestrator |
2026-03-05 00:41:19.309059 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:41:19.309094 | orchestrator | Thursday 05 March 2026 00:41:18 +0000 (0:00:00.218) 0:00:06.610 ********
2026-03-05 00:41:19.309104 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:41:19.309115 | orchestrator |
2026-03-05 00:41:19.309126 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:41:19.309137 | orchestrator | Thursday 05 March 2026 00:41:18 +0000 (0:00:00.209) 0:00:06.820 ********
2026-03-05 00:41:19.309147 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:41:19.309166 | orchestrator |
2026-03-05 00:41:19.309176 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:41:19.309185 | orchestrator | Thursday 05 March 2026 00:41:18 +0000 (0:00:00.220) 0:00:07.041 ********
2026-03-05 00:41:19.309195 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:41:19.309203 | orchestrator |
2026-03-05 00:41:19.309212 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:41:19.309221 | orchestrator | Thursday 05 March 2026 00:41:18 +0000 (0:00:00.212) 0:00:07.253 ********
2026-03-05 00:41:19.309230 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:41:19.309240 | orchestrator |
2026-03-05 00:41:19.309255 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:41:19.309265 | orchestrator | Thursday 05 March 2026 00:41:18 +0000 (0:00:00.199) 0:00:07.453 ********
2026-03-05 00:41:19.309275 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:41:19.309284 | orchestrator |
2026-03-05 00:41:19.309294 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:41:19.309303 | orchestrator | Thursday 05 March 2026 00:41:19 +0000 (0:00:00.198) 0:00:07.652 ********
2026-03-05 00:41:19.309313 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:41:19.309322 | orchestrator |
2026-03-05 00:41:19.309342 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:41:27.414505 | orchestrator | Thursday 05 March 2026 00:41:19 +0000 (0:00:00.222) 0:00:07.875 ********
2026-03-05 00:41:27.414582 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:41:27.414589 | orchestrator |
2026-03-05 00:41:27.414594 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:41:27.414598 | orchestrator | Thursday 05 March 2026 00:41:19 +0000 (0:00:00.194) 0:00:08.069 ********
2026-03-05 00:41:27.414603 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-03-05 00:41:27.414607 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-03-05 00:41:27.414612 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-03-05 00:41:27.414615 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-03-05 00:41:27.414619 | orchestrator |
2026-03-05 00:41:27.414623 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:41:27.414627 | orchestrator | Thursday 05 March 2026 00:41:20 +0000 (0:00:01.091) 0:00:09.161 ********
2026-03-05 00:41:27.414631 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:41:27.414634 | orchestrator |
2026-03-05 00:41:27.414638 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:41:27.414642 | orchestrator | Thursday 05 March 2026 00:41:20 +0000 (0:00:00.202) 0:00:09.363 ********
2026-03-05 00:41:27.414646 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:41:27.414649 | orchestrator |
2026-03-05 00:41:27.414653 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:41:27.414657 | orchestrator | Thursday 05 March 2026 00:41:21 +0000 (0:00:00.221) 0:00:09.585 ********
2026-03-05 00:41:27.414660 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:41:27.414664 | orchestrator |
2026-03-05 00:41:27.414668 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:41:27.414672 | orchestrator | Thursday 05 March 2026 00:41:21 +0000 (0:00:00.213) 0:00:09.798 ********
2026-03-05 00:41:27.414675 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:41:27.414679 | orchestrator |
2026-03-05 00:41:27.414683 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-03-05 00:41:27.414686 | orchestrator | Thursday 05 March 2026 00:41:21 +0000 (0:00:00.235) 0:00:10.033 ********
2026-03-05 00:41:27.414691 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2026-03-05 00:41:27.414695 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2026-03-05 00:41:27.414698 | orchestrator |
2026-03-05 00:41:27.414702 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-03-05 00:41:27.414706 | orchestrator | Thursday 05 March 2026 00:41:21 +0000 (0:00:00.182) 0:00:10.216 ********
2026-03-05 00:41:27.414723 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:41:27.414727 | orchestrator |
2026-03-05 00:41:27.414731 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-03-05 00:41:27.414734 | orchestrator | Thursday 05 March 2026 00:41:21 +0000 (0:00:00.147) 0:00:10.364 ********
2026-03-05 00:41:27.414738 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:41:27.414742 | orchestrator |
2026-03-05 00:41:27.414747 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-03-05 00:41:27.414750 | orchestrator | Thursday 05 March 2026 00:41:21 +0000 (0:00:00.148) 0:00:10.512 ********
2026-03-05 00:41:27.414754 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:41:27.414758 | orchestrator |
2026-03-05 00:41:27.414761 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-03-05 00:41:27.414765 | orchestrator | Thursday 05 March 2026 00:41:22 +0000 (0:00:00.144) 0:00:10.656 ********
2026-03-05 00:41:27.414769 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:41:27.414773 | orchestrator |
2026-03-05 00:41:27.414777 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-03-05 00:41:27.414780 | orchestrator | Thursday 05 March 2026 00:41:22 +0000 (0:00:00.148) 0:00:10.804 ********
2026-03-05 00:41:27.414785 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f88409fd-5147-5194-8288-2488b5e44352'}})
2026-03-05 00:41:27.414789 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9d6733ad-9ad8-5bce-b749-e645aedee181'}})
2026-03-05 00:41:27.414793 | orchestrator |
2026-03-05 00:41:27.414797 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-03-05 00:41:27.414800 | orchestrator | Thursday 05 March 2026 00:41:22 +0000 (0:00:00.195) 0:00:11.000 ********
2026-03-05 00:41:27.414805 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f88409fd-5147-5194-8288-2488b5e44352'}})
2026-03-05 00:41:27.414817 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9d6733ad-9ad8-5bce-b749-e645aedee181'}})
2026-03-05 00:41:27.414821 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:41:27.414825 | orchestrator |
2026-03-05 00:41:27.414829 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-03-05 00:41:27.414833 | orchestrator | Thursday 05 March 2026 00:41:22 +0000 (0:00:00.150) 0:00:11.151 ********
2026-03-05 00:41:27.414837 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f88409fd-5147-5194-8288-2488b5e44352'}})
2026-03-05 00:41:27.414841 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9d6733ad-9ad8-5bce-b749-e645aedee181'}})
2026-03-05 00:41:27.414844 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:41:27.414848 | orchestrator |
2026-03-05 00:41:27.414899 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-03-05 00:41:27.414904 | orchestrator | Thursday 05 March 2026 00:41:22 +0000 (0:00:00.401) 0:00:11.552 ********
2026-03-05 00:41:27.414908 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f88409fd-5147-5194-8288-2488b5e44352'}})
2026-03-05 00:41:27.414930 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9d6733ad-9ad8-5bce-b749-e645aedee181'}})
2026-03-05 00:41:27.414934 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:41:27.414944 | orchestrator |
2026-03-05 00:41:27.414948 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-03-05 00:41:27.414952 | orchestrator | Thursday 05 March 2026 00:41:23 +0000 (0:00:00.162) 0:00:11.715 ********
2026-03-05 00:41:27.414956 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:41:27.414960 | orchestrator |
2026-03-05 00:41:27.414963 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-03-05 00:41:27.414967 | orchestrator | Thursday 05 March 2026 00:41:23 +0000 (0:00:00.141) 0:00:11.856 ********
2026-03-05 00:41:27.414971 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:41:27.414980 | orchestrator |
2026-03-05 00:41:27.414983 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-03-05 00:41:27.414987 | orchestrator | Thursday 05 March 2026 00:41:23 +0000 (0:00:00.158) 0:00:12.015 ********
2026-03-05 00:41:27.414991 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:41:27.414995 | orchestrator |
2026-03-05 00:41:27.415005 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-03-05 00:41:27.415009 | orchestrator | Thursday 05 March 2026 00:41:23 +0000 (0:00:00.147) 0:00:12.163 ********
2026-03-05 00:41:27.415012 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:41:27.415016 | orchestrator |
2026-03-05 00:41:27.415020 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-03-05 00:41:27.415024 | orchestrator | Thursday 05 March 2026 00:41:23 +0000 (0:00:00.152) 0:00:12.315 ********
2026-03-05 00:41:27.415027 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:41:27.415031 | orchestrator |
2026-03-05 00:41:27.415035 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-03-05 00:41:27.415038 | orchestrator | Thursday 05 March 2026 00:41:23 +0000 
(0:00:00.134) 0:00:12.450 ******** 2026-03-05 00:41:27.415042 | orchestrator | ok: [testbed-node-3] => { 2026-03-05 00:41:27.415046 | orchestrator |  "ceph_osd_devices": { 2026-03-05 00:41:27.415050 | orchestrator |  "sdb": { 2026-03-05 00:41:27.415054 | orchestrator |  "osd_lvm_uuid": "f88409fd-5147-5194-8288-2488b5e44352" 2026-03-05 00:41:27.415058 | orchestrator |  }, 2026-03-05 00:41:27.415062 | orchestrator |  "sdc": { 2026-03-05 00:41:27.415066 | orchestrator |  "osd_lvm_uuid": "9d6733ad-9ad8-5bce-b749-e645aedee181" 2026-03-05 00:41:27.415070 | orchestrator |  } 2026-03-05 00:41:27.415102 | orchestrator |  } 2026-03-05 00:41:27.415107 | orchestrator | } 2026-03-05 00:41:27.415112 | orchestrator | 2026-03-05 00:41:27.415116 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-03-05 00:41:27.415120 | orchestrator | Thursday 05 March 2026 00:41:24 +0000 (0:00:00.150) 0:00:12.601 ******** 2026-03-05 00:41:27.415125 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:41:27.415129 | orchestrator | 2026-03-05 00:41:27.415134 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-03-05 00:41:27.415138 | orchestrator | Thursday 05 March 2026 00:41:24 +0000 (0:00:00.137) 0:00:12.738 ******** 2026-03-05 00:41:27.415143 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:41:27.415147 | orchestrator | 2026-03-05 00:41:27.415152 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-03-05 00:41:27.415156 | orchestrator | Thursday 05 March 2026 00:41:24 +0000 (0:00:00.138) 0:00:12.876 ******** 2026-03-05 00:41:27.415161 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:41:27.415165 | orchestrator | 2026-03-05 00:41:27.415170 | orchestrator | TASK [Print configuration data] ************************************************ 2026-03-05 00:41:27.415174 | orchestrator | Thursday 05 March 2026 00:41:24 +0000 
(0:00:00.157) 0:00:13.034 ******** 2026-03-05 00:41:27.415179 | orchestrator | changed: [testbed-node-3] => { 2026-03-05 00:41:27.415183 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-03-05 00:41:27.415188 | orchestrator |  "ceph_osd_devices": { 2026-03-05 00:41:27.415192 | orchestrator |  "sdb": { 2026-03-05 00:41:27.415197 | orchestrator |  "osd_lvm_uuid": "f88409fd-5147-5194-8288-2488b5e44352" 2026-03-05 00:41:27.415202 | orchestrator |  }, 2026-03-05 00:41:27.415205 | orchestrator |  "sdc": { 2026-03-05 00:41:27.415209 | orchestrator |  "osd_lvm_uuid": "9d6733ad-9ad8-5bce-b749-e645aedee181" 2026-03-05 00:41:27.415213 | orchestrator |  } 2026-03-05 00:41:27.415217 | orchestrator |  }, 2026-03-05 00:41:27.415220 | orchestrator |  "lvm_volumes": [ 2026-03-05 00:41:27.415224 | orchestrator |  { 2026-03-05 00:41:27.415228 | orchestrator |  "data": "osd-block-f88409fd-5147-5194-8288-2488b5e44352", 2026-03-05 00:41:27.415232 | orchestrator |  "data_vg": "ceph-f88409fd-5147-5194-8288-2488b5e44352" 2026-03-05 00:41:27.415239 | orchestrator |  }, 2026-03-05 00:41:27.415243 | orchestrator |  { 2026-03-05 00:41:27.415246 | orchestrator |  "data": "osd-block-9d6733ad-9ad8-5bce-b749-e645aedee181", 2026-03-05 00:41:27.415250 | orchestrator |  "data_vg": "ceph-9d6733ad-9ad8-5bce-b749-e645aedee181" 2026-03-05 00:41:27.415254 | orchestrator |  } 2026-03-05 00:41:27.415258 | orchestrator |  ] 2026-03-05 00:41:27.415262 | orchestrator |  } 2026-03-05 00:41:27.415265 | orchestrator | } 2026-03-05 00:41:27.415269 | orchestrator | 2026-03-05 00:41:27.415273 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-03-05 00:41:27.415277 | orchestrator | Thursday 05 March 2026 00:41:24 +0000 (0:00:00.500) 0:00:13.534 ******** 2026-03-05 00:41:27.415280 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-05 00:41:27.415284 | orchestrator | 2026-03-05 00:41:27.415288 | orchestrator | PLAY [Ceph 
configure LVM] ****************************************************** 2026-03-05 00:41:27.415292 | orchestrator | 2026-03-05 00:41:27.415295 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-05 00:41:27.415299 | orchestrator | Thursday 05 March 2026 00:41:26 +0000 (0:00:01.956) 0:00:15.490 ******** 2026-03-05 00:41:27.415303 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-03-05 00:41:27.415306 | orchestrator | 2026-03-05 00:41:27.415313 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-05 00:41:27.415317 | orchestrator | Thursday 05 March 2026 00:41:27 +0000 (0:00:00.262) 0:00:15.752 ******** 2026-03-05 00:41:27.415321 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:41:27.415325 | orchestrator | 2026-03-05 00:41:27.415331 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:41:35.960398 | orchestrator | Thursday 05 March 2026 00:41:27 +0000 (0:00:00.234) 0:00:15.987 ******** 2026-03-05 00:41:35.960500 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-03-05 00:41:35.960515 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-03-05 00:41:35.960527 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-03-05 00:41:35.960538 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-03-05 00:41:35.960548 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-03-05 00:41:35.960559 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-03-05 00:41:35.960570 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-03-05 00:41:35.960585 
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-03-05 00:41:35.960596 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-03-05 00:41:35.960608 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-03-05 00:41:35.960619 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-03-05 00:41:35.960629 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-03-05 00:41:35.960640 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-03-05 00:41:35.960651 | orchestrator | 2026-03-05 00:41:35.960663 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:41:35.960674 | orchestrator | Thursday 05 March 2026 00:41:27 +0000 (0:00:00.407) 0:00:16.395 ******** 2026-03-05 00:41:35.960686 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:41:35.960698 | orchestrator | 2026-03-05 00:41:35.960709 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:41:35.960720 | orchestrator | Thursday 05 March 2026 00:41:28 +0000 (0:00:00.211) 0:00:16.606 ******** 2026-03-05 00:41:35.960755 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:41:35.960767 | orchestrator | 2026-03-05 00:41:35.960778 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:41:35.960789 | orchestrator | Thursday 05 March 2026 00:41:28 +0000 (0:00:00.190) 0:00:16.797 ******** 2026-03-05 00:41:35.960799 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:41:35.960810 | orchestrator | 2026-03-05 00:41:35.960821 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:41:35.960832 | 
orchestrator | Thursday 05 March 2026 00:41:28 +0000 (0:00:00.197) 0:00:16.995 ******** 2026-03-05 00:41:35.960843 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:41:35.960854 | orchestrator | 2026-03-05 00:41:35.960865 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:41:35.960876 | orchestrator | Thursday 05 March 2026 00:41:28 +0000 (0:00:00.184) 0:00:17.179 ******** 2026-03-05 00:41:35.960886 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:41:35.960897 | orchestrator | 2026-03-05 00:41:35.960908 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:41:35.960919 | orchestrator | Thursday 05 March 2026 00:41:29 +0000 (0:00:00.650) 0:00:17.830 ******** 2026-03-05 00:41:35.960930 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:41:35.960943 | orchestrator | 2026-03-05 00:41:35.960957 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:41:35.960970 | orchestrator | Thursday 05 March 2026 00:41:29 +0000 (0:00:00.212) 0:00:18.043 ******** 2026-03-05 00:41:35.960982 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:41:35.960996 | orchestrator | 2026-03-05 00:41:35.961008 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:41:35.961021 | orchestrator | Thursday 05 March 2026 00:41:29 +0000 (0:00:00.196) 0:00:18.239 ******** 2026-03-05 00:41:35.961033 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:41:35.961047 | orchestrator | 2026-03-05 00:41:35.961059 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:41:35.961071 | orchestrator | Thursday 05 March 2026 00:41:29 +0000 (0:00:00.240) 0:00:18.480 ******** 2026-03-05 00:41:35.961126 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-0QEMU_QEMU_HARDDISK_26e3da3f-ebef-4f2e-987f-6c33458d570f) 2026-03-05 00:41:35.961139 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_26e3da3f-ebef-4f2e-987f-6c33458d570f) 2026-03-05 00:41:35.961150 | orchestrator | 2026-03-05 00:41:35.961177 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:41:35.961189 | orchestrator | Thursday 05 March 2026 00:41:30 +0000 (0:00:00.557) 0:00:19.037 ******** 2026-03-05 00:41:35.961200 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_fde3cda2-3067-4d86-95c6-d39f62804520) 2026-03-05 00:41:35.961211 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_fde3cda2-3067-4d86-95c6-d39f62804520) 2026-03-05 00:41:35.961222 | orchestrator | 2026-03-05 00:41:35.961232 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:41:35.961243 | orchestrator | Thursday 05 March 2026 00:41:30 +0000 (0:00:00.487) 0:00:19.524 ******** 2026-03-05 00:41:35.961254 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_5fc6e5d1-feaa-44be-badf-9551630a8ded) 2026-03-05 00:41:35.961265 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_5fc6e5d1-feaa-44be-badf-9551630a8ded) 2026-03-05 00:41:35.961275 | orchestrator | 2026-03-05 00:41:35.961286 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:41:35.961314 | orchestrator | Thursday 05 March 2026 00:41:31 +0000 (0:00:00.593) 0:00:20.118 ******** 2026-03-05 00:41:35.961326 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_4cc63ee5-51dc-4d14-b9fb-faf031b30aaa) 2026-03-05 00:41:35.961337 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_4cc63ee5-51dc-4d14-b9fb-faf031b30aaa) 2026-03-05 00:41:35.961348 | orchestrator | 2026-03-05 00:41:35.961366 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2026-03-05 00:41:35.961376 | orchestrator | Thursday 05 March 2026 00:41:32 +0000 (0:00:00.516) 0:00:20.634 ******** 2026-03-05 00:41:35.961387 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-05 00:41:35.961398 | orchestrator | 2026-03-05 00:41:35.961408 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:41:35.961419 | orchestrator | Thursday 05 March 2026 00:41:32 +0000 (0:00:00.333) 0:00:20.968 ******** 2026-03-05 00:41:35.961430 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-03-05 00:41:35.961441 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-03-05 00:41:35.961451 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-03-05 00:41:35.961462 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-03-05 00:41:35.961473 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-03-05 00:41:35.961483 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-03-05 00:41:35.961494 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-03-05 00:41:35.961505 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-03-05 00:41:35.961515 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-03-05 00:41:35.961526 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-03-05 00:41:35.961536 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 
2026-03-05 00:41:35.961547 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-03-05 00:41:35.961557 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-03-05 00:41:35.961568 | orchestrator | 2026-03-05 00:41:35.961579 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:41:35.961589 | orchestrator | Thursday 05 March 2026 00:41:32 +0000 (0:00:00.378) 0:00:21.346 ******** 2026-03-05 00:41:35.961600 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:41:35.961611 | orchestrator | 2026-03-05 00:41:35.961621 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:41:35.961632 | orchestrator | Thursday 05 March 2026 00:41:33 +0000 (0:00:00.704) 0:00:22.051 ******** 2026-03-05 00:41:35.961643 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:41:35.961653 | orchestrator | 2026-03-05 00:41:35.961664 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:41:35.961675 | orchestrator | Thursday 05 March 2026 00:41:33 +0000 (0:00:00.190) 0:00:22.241 ******** 2026-03-05 00:41:35.961686 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:41:35.961696 | orchestrator | 2026-03-05 00:41:35.961707 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:41:35.961718 | orchestrator | Thursday 05 March 2026 00:41:33 +0000 (0:00:00.190) 0:00:22.432 ******** 2026-03-05 00:41:35.961728 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:41:35.961739 | orchestrator | 2026-03-05 00:41:35.961749 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:41:35.961760 | orchestrator | Thursday 05 March 2026 00:41:34 +0000 (0:00:00.202) 0:00:22.634 ******** 2026-03-05 00:41:35.961771 
| orchestrator | skipping: [testbed-node-4] 2026-03-05 00:41:35.961781 | orchestrator | 2026-03-05 00:41:35.961792 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:41:35.961803 | orchestrator | Thursday 05 March 2026 00:41:34 +0000 (0:00:00.185) 0:00:22.820 ******** 2026-03-05 00:41:35.961814 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:41:35.961831 | orchestrator | 2026-03-05 00:41:35.961847 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:41:35.961858 | orchestrator | Thursday 05 March 2026 00:41:34 +0000 (0:00:00.235) 0:00:23.056 ******** 2026-03-05 00:41:35.961869 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:41:35.961879 | orchestrator | 2026-03-05 00:41:35.961890 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:41:35.961901 | orchestrator | Thursday 05 March 2026 00:41:34 +0000 (0:00:00.220) 0:00:23.276 ******** 2026-03-05 00:41:35.961911 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:41:35.961922 | orchestrator | 2026-03-05 00:41:35.961933 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:41:35.961944 | orchestrator | Thursday 05 March 2026 00:41:34 +0000 (0:00:00.216) 0:00:23.493 ******** 2026-03-05 00:41:35.961954 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-03-05 00:41:35.961966 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-03-05 00:41:35.961976 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-03-05 00:41:35.961987 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-03-05 00:41:35.961998 | orchestrator | 2026-03-05 00:41:35.962009 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:41:35.962074 | orchestrator | Thursday 05 March 2026 00:41:35 +0000 (0:00:00.893) 
0:00:24.387 ******** 2026-03-05 00:41:35.962104 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:41:43.813688 | orchestrator | 2026-03-05 00:41:43.813793 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:41:43.813822 | orchestrator | Thursday 05 March 2026 00:41:36 +0000 (0:00:00.225) 0:00:24.613 ******** 2026-03-05 00:41:43.813842 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:41:43.813861 | orchestrator | 2026-03-05 00:41:43.813880 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:41:43.813900 | orchestrator | Thursday 05 March 2026 00:41:36 +0000 (0:00:00.196) 0:00:24.809 ******** 2026-03-05 00:41:43.813918 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:41:43.813938 | orchestrator | 2026-03-05 00:41:43.813956 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:41:43.813974 | orchestrator | Thursday 05 March 2026 00:41:36 +0000 (0:00:00.220) 0:00:25.030 ******** 2026-03-05 00:41:43.813993 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:41:43.814012 | orchestrator | 2026-03-05 00:41:43.814138 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-03-05 00:41:43.814175 | orchestrator | Thursday 05 March 2026 00:41:37 +0000 (0:00:00.781) 0:00:25.811 ******** 2026-03-05 00:41:43.814196 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2026-03-05 00:41:43.814216 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2026-03-05 00:41:43.814238 | orchestrator | 2026-03-05 00:41:43.814260 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-03-05 00:41:43.814282 | orchestrator | Thursday 05 March 2026 00:41:37 +0000 (0:00:00.222) 0:00:26.034 ******** 2026-03-05 00:41:43.814303 | orchestrator | skipping: 
[testbed-node-4] 2026-03-05 00:41:43.814324 | orchestrator | 2026-03-05 00:41:43.814344 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-03-05 00:41:43.814365 | orchestrator | Thursday 05 March 2026 00:41:37 +0000 (0:00:00.157) 0:00:26.192 ******** 2026-03-05 00:41:43.814386 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:41:43.814398 | orchestrator | 2026-03-05 00:41:43.814409 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-03-05 00:41:43.814420 | orchestrator | Thursday 05 March 2026 00:41:37 +0000 (0:00:00.141) 0:00:26.333 ******** 2026-03-05 00:41:43.814430 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:41:43.814441 | orchestrator | 2026-03-05 00:41:43.814452 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-03-05 00:41:43.814462 | orchestrator | Thursday 05 March 2026 00:41:37 +0000 (0:00:00.173) 0:00:26.507 ******** 2026-03-05 00:41:43.814495 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:41:43.814507 | orchestrator | 2026-03-05 00:41:43.814518 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-03-05 00:41:43.814528 | orchestrator | Thursday 05 March 2026 00:41:38 +0000 (0:00:00.181) 0:00:26.688 ******** 2026-03-05 00:41:43.814540 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '130794de-baff-5f0b-9c30-9a8206b73831'}}) 2026-03-05 00:41:43.814551 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '54671a7c-dad9-563e-9508-4448c9acfc6a'}}) 2026-03-05 00:41:43.814561 | orchestrator | 2026-03-05 00:41:43.814572 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-03-05 00:41:43.814583 | orchestrator | Thursday 05 March 2026 00:41:38 +0000 (0:00:00.219) 0:00:26.908 ******** 2026-03-05 00:41:43.814594 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '130794de-baff-5f0b-9c30-9a8206b73831'}})  2026-03-05 00:41:43.814606 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '54671a7c-dad9-563e-9508-4448c9acfc6a'}})  2026-03-05 00:41:43.814617 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:41:43.814627 | orchestrator | 2026-03-05 00:41:43.814638 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-03-05 00:41:43.814649 | orchestrator | Thursday 05 March 2026 00:41:38 +0000 (0:00:00.179) 0:00:27.087 ******** 2026-03-05 00:41:43.814660 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '130794de-baff-5f0b-9c30-9a8206b73831'}})  2026-03-05 00:41:43.814671 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '54671a7c-dad9-563e-9508-4448c9acfc6a'}})  2026-03-05 00:41:43.814683 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:41:43.814693 | orchestrator | 2026-03-05 00:41:43.814704 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-03-05 00:41:43.814715 | orchestrator | Thursday 05 March 2026 00:41:38 +0000 (0:00:00.216) 0:00:27.304 ******** 2026-03-05 00:41:43.814725 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '130794de-baff-5f0b-9c30-9a8206b73831'}})  2026-03-05 00:41:43.814736 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '54671a7c-dad9-563e-9508-4448c9acfc6a'}})  2026-03-05 00:41:43.814747 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:41:43.814757 | orchestrator | 2026-03-05 00:41:43.814783 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-03-05 00:41:43.814794 | orchestrator | Thursday 05 March 2026 00:41:38 +0000 
(0:00:00.171) 0:00:27.476 ******** 2026-03-05 00:41:43.814805 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:41:43.814816 | orchestrator | 2026-03-05 00:41:43.814826 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-03-05 00:41:43.814837 | orchestrator | Thursday 05 March 2026 00:41:39 +0000 (0:00:00.193) 0:00:27.669 ******** 2026-03-05 00:41:43.814848 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:41:43.814858 | orchestrator | 2026-03-05 00:41:43.814869 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-03-05 00:41:43.814880 | orchestrator | Thursday 05 March 2026 00:41:39 +0000 (0:00:00.208) 0:00:27.878 ******** 2026-03-05 00:41:43.814907 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:41:43.814918 | orchestrator | 2026-03-05 00:41:43.814929 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-03-05 00:41:43.814939 | orchestrator | Thursday 05 March 2026 00:41:39 +0000 (0:00:00.490) 0:00:28.369 ******** 2026-03-05 00:41:43.814950 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:41:43.814960 | orchestrator | 2026-03-05 00:41:43.814971 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-03-05 00:41:43.814981 | orchestrator | Thursday 05 March 2026 00:41:39 +0000 (0:00:00.187) 0:00:28.556 ******** 2026-03-05 00:41:43.814992 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:41:43.815010 | orchestrator | 2026-03-05 00:41:43.815021 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-03-05 00:41:43.815031 | orchestrator | Thursday 05 March 2026 00:41:40 +0000 (0:00:00.156) 0:00:28.713 ******** 2026-03-05 00:41:43.815042 | orchestrator | ok: [testbed-node-4] => { 2026-03-05 00:41:43.815052 | orchestrator |  "ceph_osd_devices": { 2026-03-05 00:41:43.815063 | orchestrator |  "sdb": { 
2026-03-05 00:41:43.815074 | orchestrator |  "osd_lvm_uuid": "130794de-baff-5f0b-9c30-9a8206b73831" 2026-03-05 00:41:43.815108 | orchestrator |  }, 2026-03-05 00:41:43.815122 | orchestrator |  "sdc": { 2026-03-05 00:41:43.815132 | orchestrator |  "osd_lvm_uuid": "54671a7c-dad9-563e-9508-4448c9acfc6a" 2026-03-05 00:41:43.815143 | orchestrator |  } 2026-03-05 00:41:43.815154 | orchestrator |  } 2026-03-05 00:41:43.815165 | orchestrator | } 2026-03-05 00:41:43.815176 | orchestrator | 2026-03-05 00:41:43.815187 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-03-05 00:41:43.815198 | orchestrator | Thursday 05 March 2026 00:41:40 +0000 (0:00:00.163) 0:00:28.877 ******** 2026-03-05 00:41:43.815208 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:41:43.815219 | orchestrator | 2026-03-05 00:41:43.815230 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-03-05 00:41:43.815240 | orchestrator | Thursday 05 March 2026 00:41:40 +0000 (0:00:00.152) 0:00:29.030 ******** 2026-03-05 00:41:43.815251 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:41:43.815262 | orchestrator | 2026-03-05 00:41:43.815272 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-03-05 00:41:43.815283 | orchestrator | Thursday 05 March 2026 00:41:40 +0000 (0:00:00.141) 0:00:29.172 ******** 2026-03-05 00:41:43.815294 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:41:43.815304 | orchestrator | 2026-03-05 00:41:43.815315 | orchestrator | TASK [Print configuration data] ************************************************ 2026-03-05 00:41:43.815329 | orchestrator | Thursday 05 March 2026 00:41:40 +0000 (0:00:00.130) 0:00:29.303 ******** 2026-03-05 00:41:43.815347 | orchestrator | changed: [testbed-node-4] => { 2026-03-05 00:41:43.815365 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-03-05 00:41:43.815385 | orchestrator | 
 "ceph_osd_devices": { 2026-03-05 00:41:43.815396 | orchestrator |  "sdb": { 2026-03-05 00:41:43.815407 | orchestrator |  "osd_lvm_uuid": "130794de-baff-5f0b-9c30-9a8206b73831" 2026-03-05 00:41:43.815418 | orchestrator |  }, 2026-03-05 00:41:43.815429 | orchestrator |  "sdc": { 2026-03-05 00:41:43.815439 | orchestrator |  "osd_lvm_uuid": "54671a7c-dad9-563e-9508-4448c9acfc6a" 2026-03-05 00:41:43.815450 | orchestrator |  } 2026-03-05 00:41:43.815461 | orchestrator |  }, 2026-03-05 00:41:43.815472 | orchestrator |  "lvm_volumes": [ 2026-03-05 00:41:43.815482 | orchestrator |  { 2026-03-05 00:41:43.815493 | orchestrator |  "data": "osd-block-130794de-baff-5f0b-9c30-9a8206b73831", 2026-03-05 00:41:43.815504 | orchestrator |  "data_vg": "ceph-130794de-baff-5f0b-9c30-9a8206b73831" 2026-03-05 00:41:43.815514 | orchestrator |  }, 2026-03-05 00:41:43.815525 | orchestrator |  { 2026-03-05 00:41:43.815536 | orchestrator |  "data": "osd-block-54671a7c-dad9-563e-9508-4448c9acfc6a", 2026-03-05 00:41:43.815547 | orchestrator |  "data_vg": "ceph-54671a7c-dad9-563e-9508-4448c9acfc6a" 2026-03-05 00:41:43.815557 | orchestrator |  } 2026-03-05 00:41:43.815568 | orchestrator |  ] 2026-03-05 00:41:43.815579 | orchestrator |  } 2026-03-05 00:41:43.815589 | orchestrator | } 2026-03-05 00:41:43.815600 | orchestrator | 2026-03-05 00:41:43.815611 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-03-05 00:41:43.815622 | orchestrator | Thursday 05 March 2026 00:41:40 +0000 (0:00:00.259) 0:00:29.563 ******** 2026-03-05 00:41:43.815632 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-03-05 00:41:43.815644 | orchestrator | 2026-03-05 00:41:43.815662 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-03-05 00:41:43.815673 | orchestrator | 2026-03-05 00:41:43.815684 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 
2026-03-05 00:41:43.815695 | orchestrator | Thursday 05 March 2026 00:41:42 +0000 (0:00:01.726) 0:00:31.289 ******** 2026-03-05 00:41:43.815705 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-03-05 00:41:43.815716 | orchestrator | 2026-03-05 00:41:43.815727 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-05 00:41:43.815737 | orchestrator | Thursday 05 March 2026 00:41:43 +0000 (0:00:00.504) 0:00:31.793 ******** 2026-03-05 00:41:43.815748 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:41:43.815759 | orchestrator | 2026-03-05 00:41:43.815769 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:41:43.815780 | orchestrator | Thursday 05 March 2026 00:41:43 +0000 (0:00:00.218) 0:00:32.012 ******** 2026-03-05 00:41:43.815791 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-03-05 00:41:43.815801 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-03-05 00:41:43.815812 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-03-05 00:41:43.815823 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-03-05 00:41:43.815834 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-03-05 00:41:43.815852 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-03-05 00:41:52.207942 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-03-05 00:41:52.208038 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-03-05 00:41:52.208050 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-03-05 
00:41:52.208059 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-03-05 00:41:52.208108 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-03-05 00:41:52.208125 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-03-05 00:41:52.208135 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-03-05 00:41:52.208144 | orchestrator | 2026-03-05 00:41:52.208154 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:41:52.208164 | orchestrator | Thursday 05 March 2026 00:41:43 +0000 (0:00:00.463) 0:00:32.475 ******** 2026-03-05 00:41:52.208173 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:41:52.208182 | orchestrator | 2026-03-05 00:41:52.208191 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:41:52.208200 | orchestrator | Thursday 05 March 2026 00:41:44 +0000 (0:00:00.161) 0:00:32.637 ******** 2026-03-05 00:41:52.208208 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:41:52.208216 | orchestrator | 2026-03-05 00:41:52.208225 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:41:52.208233 | orchestrator | Thursday 05 March 2026 00:41:44 +0000 (0:00:00.161) 0:00:32.798 ******** 2026-03-05 00:41:52.208242 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:41:52.208250 | orchestrator | 2026-03-05 00:41:52.208259 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:41:52.208267 | orchestrator | Thursday 05 March 2026 00:41:44 +0000 (0:00:00.159) 0:00:32.957 ******** 2026-03-05 00:41:52.208280 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:41:52.208289 | orchestrator | 2026-03-05 00:41:52.208297 | orchestrator 
| TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:41:52.208306 | orchestrator | Thursday 05 March 2026 00:41:44 +0000 (0:00:00.156) 0:00:33.114 ******** 2026-03-05 00:41:52.208332 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:41:52.208341 | orchestrator | 2026-03-05 00:41:52.208349 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:41:52.208358 | orchestrator | Thursday 05 March 2026 00:41:44 +0000 (0:00:00.165) 0:00:33.280 ******** 2026-03-05 00:41:52.208366 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:41:52.208375 | orchestrator | 2026-03-05 00:41:52.208383 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:41:52.208392 | orchestrator | Thursday 05 March 2026 00:41:44 +0000 (0:00:00.170) 0:00:33.450 ******** 2026-03-05 00:41:52.208400 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:41:52.208409 | orchestrator | 2026-03-05 00:41:52.208418 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:41:52.208427 | orchestrator | Thursday 05 March 2026 00:41:45 +0000 (0:00:00.165) 0:00:33.616 ******** 2026-03-05 00:41:52.208435 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:41:52.208444 | orchestrator | 2026-03-05 00:41:52.208452 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:41:52.208461 | orchestrator | Thursday 05 March 2026 00:41:45 +0000 (0:00:00.187) 0:00:33.803 ******** 2026-03-05 00:41:52.208469 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d85b406f-f47a-4803-8455-48f8dde86a68) 2026-03-05 00:41:52.208479 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d85b406f-f47a-4803-8455-48f8dde86a68) 2026-03-05 00:41:52.208487 | orchestrator | 2026-03-05 00:41:52.208496 | orchestrator | TASK [Add 
known links to the list of available block devices] ****************** 2026-03-05 00:41:52.208504 | orchestrator | Thursday 05 March 2026 00:41:46 +0000 (0:00:00.991) 0:00:34.794 ******** 2026-03-05 00:41:52.208513 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_51d29519-c1f9-43c2-8da2-810d6ee2cf1d) 2026-03-05 00:41:52.208521 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_51d29519-c1f9-43c2-8da2-810d6ee2cf1d) 2026-03-05 00:41:52.208530 | orchestrator | 2026-03-05 00:41:52.208538 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:41:52.208547 | orchestrator | Thursday 05 March 2026 00:41:46 +0000 (0:00:00.429) 0:00:35.224 ******** 2026-03-05 00:41:52.208555 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_01b35dfc-cc13-430f-9521-065aaefb7085) 2026-03-05 00:41:52.208564 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_01b35dfc-cc13-430f-9521-065aaefb7085) 2026-03-05 00:41:52.208573 | orchestrator | 2026-03-05 00:41:52.208581 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:41:52.208590 | orchestrator | Thursday 05 March 2026 00:41:47 +0000 (0:00:00.479) 0:00:35.703 ******** 2026-03-05 00:41:52.208598 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_f3d47084-7273-4e4c-b048-5cf25f7ffc67) 2026-03-05 00:41:52.208607 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_f3d47084-7273-4e4c-b048-5cf25f7ffc67) 2026-03-05 00:41:52.208615 | orchestrator | 2026-03-05 00:41:52.208624 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:41:52.208632 | orchestrator | Thursday 05 March 2026 00:41:47 +0000 (0:00:00.467) 0:00:36.171 ******** 2026-03-05 00:41:52.208641 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-05 00:41:52.208649 | 
orchestrator | 2026-03-05 00:41:52.208658 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:41:52.208681 | orchestrator | Thursday 05 March 2026 00:41:47 +0000 (0:00:00.368) 0:00:36.539 ******** 2026-03-05 00:41:52.208690 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-03-05 00:41:52.208699 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-03-05 00:41:52.208708 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-03-05 00:41:52.208717 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-03-05 00:41:52.208731 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-03-05 00:41:52.208740 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-03-05 00:41:52.208748 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-03-05 00:41:52.208756 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-03-05 00:41:52.208765 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-03-05 00:41:52.208773 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-03-05 00:41:52.208781 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-03-05 00:41:52.208790 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-03-05 00:41:52.208798 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-03-05 00:41:52.208807 | orchestrator | 
2026-03-05 00:41:52.208815 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:41:52.208824 | orchestrator | Thursday 05 March 2026 00:41:48 +0000 (0:00:00.406) 0:00:36.945 ******** 2026-03-05 00:41:52.208832 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:41:52.208841 | orchestrator | 2026-03-05 00:41:52.208849 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:41:52.208858 | orchestrator | Thursday 05 March 2026 00:41:48 +0000 (0:00:00.222) 0:00:37.168 ******** 2026-03-05 00:41:52.208867 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:41:52.208875 | orchestrator | 2026-03-05 00:41:52.208884 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:41:52.208892 | orchestrator | Thursday 05 March 2026 00:41:48 +0000 (0:00:00.238) 0:00:37.406 ******** 2026-03-05 00:41:52.208901 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:41:52.208909 | orchestrator | 2026-03-05 00:41:52.208918 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:41:52.208931 | orchestrator | Thursday 05 March 2026 00:41:49 +0000 (0:00:00.235) 0:00:37.642 ******** 2026-03-05 00:41:52.208940 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:41:52.208948 | orchestrator | 2026-03-05 00:41:52.208957 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:41:52.208965 | orchestrator | Thursday 05 March 2026 00:41:49 +0000 (0:00:00.201) 0:00:37.844 ******** 2026-03-05 00:41:52.208974 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:41:52.208982 | orchestrator | 2026-03-05 00:41:52.208991 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:41:52.208999 | orchestrator | Thursday 05 March 2026 00:41:49 +0000 
(0:00:00.196) 0:00:38.040 ******** 2026-03-05 00:41:52.209008 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:41:52.209016 | orchestrator | 2026-03-05 00:41:52.209025 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:41:52.209033 | orchestrator | Thursday 05 March 2026 00:41:50 +0000 (0:00:00.698) 0:00:38.739 ******** 2026-03-05 00:41:52.209042 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:41:52.209050 | orchestrator | 2026-03-05 00:41:52.209059 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:41:52.209067 | orchestrator | Thursday 05 March 2026 00:41:50 +0000 (0:00:00.276) 0:00:39.015 ******** 2026-03-05 00:41:52.209076 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:41:52.209101 | orchestrator | 2026-03-05 00:41:52.209111 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:41:52.209120 | orchestrator | Thursday 05 March 2026 00:41:50 +0000 (0:00:00.260) 0:00:39.276 ******** 2026-03-05 00:41:52.209129 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-03-05 00:41:52.209143 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-03-05 00:41:52.209152 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-03-05 00:41:52.209160 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-03-05 00:41:52.209169 | orchestrator | 2026-03-05 00:41:52.209177 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:41:52.209186 | orchestrator | Thursday 05 March 2026 00:41:51 +0000 (0:00:00.656) 0:00:39.932 ******** 2026-03-05 00:41:52.209195 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:41:52.209203 | orchestrator | 2026-03-05 00:41:52.209212 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:41:52.209220 | orchestrator | 
Thursday 05 March 2026 00:41:51 +0000 (0:00:00.222) 0:00:40.155 ******** 2026-03-05 00:41:52.209229 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:41:52.209237 | orchestrator | 2026-03-05 00:41:52.209246 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:41:52.209254 | orchestrator | Thursday 05 March 2026 00:41:51 +0000 (0:00:00.193) 0:00:40.349 ******** 2026-03-05 00:41:52.209263 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:41:52.209271 | orchestrator | 2026-03-05 00:41:52.209280 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:41:52.209288 | orchestrator | Thursday 05 March 2026 00:41:51 +0000 (0:00:00.221) 0:00:40.570 ******** 2026-03-05 00:41:52.209297 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:41:52.209305 | orchestrator | 2026-03-05 00:41:52.209318 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-03-05 00:41:56.746821 | orchestrator | Thursday 05 March 2026 00:41:52 +0000 (0:00:00.209) 0:00:40.780 ******** 2026-03-05 00:41:56.746920 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2026-03-05 00:41:56.746934 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2026-03-05 00:41:56.746946 | orchestrator | 2026-03-05 00:41:56.746958 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-03-05 00:41:56.746969 | orchestrator | Thursday 05 March 2026 00:41:52 +0000 (0:00:00.179) 0:00:40.960 ******** 2026-03-05 00:41:56.746980 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:41:56.746991 | orchestrator | 2026-03-05 00:41:56.747002 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-03-05 00:41:56.747012 | orchestrator | Thursday 05 March 2026 00:41:52 +0000 (0:00:00.136) 0:00:41.097 ******** 
2026-03-05 00:41:56.747023 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:41:56.747034 | orchestrator | 2026-03-05 00:41:56.747044 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-03-05 00:41:56.747055 | orchestrator | Thursday 05 March 2026 00:41:52 +0000 (0:00:00.127) 0:00:41.224 ******** 2026-03-05 00:41:56.747065 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:41:56.747076 | orchestrator | 2026-03-05 00:41:56.747113 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-03-05 00:41:56.747124 | orchestrator | Thursday 05 March 2026 00:41:53 +0000 (0:00:00.371) 0:00:41.595 ******** 2026-03-05 00:41:56.747135 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:41:56.747147 | orchestrator | 2026-03-05 00:41:56.747158 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-03-05 00:41:56.747168 | orchestrator | Thursday 05 March 2026 00:41:53 +0000 (0:00:00.148) 0:00:41.744 ******** 2026-03-05 00:41:56.747179 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7f4ff93a-c4fd-5f9b-af1c-107d8e49bf15'}}) 2026-03-05 00:41:56.747191 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '56dff28b-2239-50bc-bb4f-66f9aa80ba88'}}) 2026-03-05 00:41:56.747201 | orchestrator | 2026-03-05 00:41:56.747212 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-03-05 00:41:56.747224 | orchestrator | Thursday 05 March 2026 00:41:53 +0000 (0:00:00.183) 0:00:41.927 ******** 2026-03-05 00:41:56.747235 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7f4ff93a-c4fd-5f9b-af1c-107d8e49bf15'}})  2026-03-05 00:41:56.747273 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '56dff28b-2239-50bc-bb4f-66f9aa80ba88'}})  
2026-03-05 00:41:56.747284 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:41:56.747295 | orchestrator | 2026-03-05 00:41:56.747306 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-03-05 00:41:56.747317 | orchestrator | Thursday 05 March 2026 00:41:53 +0000 (0:00:00.168) 0:00:42.096 ******** 2026-03-05 00:41:56.747336 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7f4ff93a-c4fd-5f9b-af1c-107d8e49bf15'}})  2026-03-05 00:41:56.747355 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '56dff28b-2239-50bc-bb4f-66f9aa80ba88'}})  2026-03-05 00:41:56.747385 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:41:56.747407 | orchestrator | 2026-03-05 00:41:56.747428 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-03-05 00:41:56.747449 | orchestrator | Thursday 05 March 2026 00:41:53 +0000 (0:00:00.156) 0:00:42.253 ******** 2026-03-05 00:41:56.747469 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7f4ff93a-c4fd-5f9b-af1c-107d8e49bf15'}})  2026-03-05 00:41:56.747491 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '56dff28b-2239-50bc-bb4f-66f9aa80ba88'}})  2026-03-05 00:41:56.747511 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:41:56.747532 | orchestrator | 2026-03-05 00:41:56.747553 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-03-05 00:41:56.747574 | orchestrator | Thursday 05 March 2026 00:41:53 +0000 (0:00:00.156) 0:00:42.409 ******** 2026-03-05 00:41:56.747595 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:41:56.747614 | orchestrator | 2026-03-05 00:41:56.747627 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-03-05 00:41:56.747640 | 
orchestrator | Thursday 05 March 2026 00:41:53 +0000 (0:00:00.144) 0:00:42.553 ********
2026-03-05 00:41:56.747652 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:41:56.747665 | orchestrator |
2026-03-05 00:41:56.747678 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-03-05 00:41:56.747690 | orchestrator | Thursday 05 March 2026 00:41:54 +0000 (0:00:00.149) 0:00:42.703 ********
2026-03-05 00:41:56.747703 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:41:56.747715 | orchestrator |
2026-03-05 00:41:56.747727 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-03-05 00:41:56.747738 | orchestrator | Thursday 05 March 2026 00:41:54 +0000 (0:00:00.134) 0:00:42.838 ********
2026-03-05 00:41:56.747749 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:41:56.747759 | orchestrator |
2026-03-05 00:41:56.747770 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-03-05 00:41:56.747780 | orchestrator | Thursday 05 March 2026 00:41:54 +0000 (0:00:00.157) 0:00:42.995 ********
2026-03-05 00:41:56.747791 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:41:56.747801 | orchestrator |
2026-03-05 00:41:56.747814 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-03-05 00:41:56.747831 | orchestrator | Thursday 05 March 2026 00:41:54 +0000 (0:00:00.133) 0:00:43.129 ********
2026-03-05 00:41:56.747857 | orchestrator | ok: [testbed-node-5] => {
2026-03-05 00:41:56.747879 | orchestrator |  "ceph_osd_devices": {
2026-03-05 00:41:56.747895 | orchestrator |  "sdb": {
2026-03-05 00:41:56.747936 | orchestrator |  "osd_lvm_uuid": "7f4ff93a-c4fd-5f9b-af1c-107d8e49bf15"
2026-03-05 00:41:56.747953 | orchestrator |  },
2026-03-05 00:41:56.747971 | orchestrator |  "sdc": {
2026-03-05 00:41:56.748012 | orchestrator |  "osd_lvm_uuid": "56dff28b-2239-50bc-bb4f-66f9aa80ba88"
2026-03-05 00:41:56.748033 | orchestrator |  }
2026-03-05 00:41:56.748051 | orchestrator |  }
2026-03-05 00:41:56.748063 | orchestrator | }
2026-03-05 00:41:56.748074 | orchestrator |
2026-03-05 00:41:56.748142 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-03-05 00:41:56.748155 | orchestrator | Thursday 05 March 2026 00:41:54 +0000 (0:00:00.148) 0:00:43.278 ********
2026-03-05 00:41:56.748166 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:41:56.748177 | orchestrator |
2026-03-05 00:41:56.748187 | orchestrator | TASK [Print DB devices] ********************************************************
2026-03-05 00:41:56.748198 | orchestrator | Thursday 05 March 2026 00:41:55 +0000 (0:00:00.365) 0:00:43.643 ********
2026-03-05 00:41:56.748209 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:41:56.748219 | orchestrator |
2026-03-05 00:41:56.748230 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-03-05 00:41:56.748241 | orchestrator | Thursday 05 March 2026 00:41:55 +0000 (0:00:00.170) 0:00:43.813 ********
2026-03-05 00:41:56.748251 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:41:56.748262 | orchestrator |
2026-03-05 00:41:56.748272 | orchestrator | TASK [Print configuration data] ************************************************
2026-03-05 00:41:56.748283 | orchestrator | Thursday 05 March 2026 00:41:55 +0000 (0:00:00.146) 0:00:43.960 ********
2026-03-05 00:41:56.748296 | orchestrator | changed: [testbed-node-5] => {
2026-03-05 00:41:56.748315 | orchestrator |  "_ceph_configure_lvm_config_data": {
2026-03-05 00:41:56.748333 | orchestrator |  "ceph_osd_devices": {
2026-03-05 00:41:56.748351 | orchestrator |  "sdb": {
2026-03-05 00:41:56.748376 | orchestrator |  "osd_lvm_uuid": "7f4ff93a-c4fd-5f9b-af1c-107d8e49bf15"
2026-03-05 00:41:56.748396 | orchestrator |  },
2026-03-05 00:41:56.748414 | orchestrator |  "sdc": {
2026-03-05 00:41:56.748438 | orchestrator |  "osd_lvm_uuid": "56dff28b-2239-50bc-bb4f-66f9aa80ba88"
2026-03-05 00:41:56.748458 | orchestrator |  }
2026-03-05 00:41:56.748485 | orchestrator |  },
2026-03-05 00:41:56.748505 | orchestrator |  "lvm_volumes": [
2026-03-05 00:41:56.748522 | orchestrator |  {
2026-03-05 00:41:56.748540 | orchestrator |  "data": "osd-block-7f4ff93a-c4fd-5f9b-af1c-107d8e49bf15",
2026-03-05 00:41:56.748560 | orchestrator |  "data_vg": "ceph-7f4ff93a-c4fd-5f9b-af1c-107d8e49bf15"
2026-03-05 00:41:56.748579 | orchestrator |  },
2026-03-05 00:41:56.748604 | orchestrator |  {
2026-03-05 00:41:56.748617 | orchestrator |  "data": "osd-block-56dff28b-2239-50bc-bb4f-66f9aa80ba88",
2026-03-05 00:41:56.748628 | orchestrator |  "data_vg": "ceph-56dff28b-2239-50bc-bb4f-66f9aa80ba88"
2026-03-05 00:41:56.748639 | orchestrator |  }
2026-03-05 00:41:56.748650 | orchestrator |  ]
2026-03-05 00:41:56.748660 | orchestrator |  }
2026-03-05 00:41:56.748671 | orchestrator | }
2026-03-05 00:41:56.748682 | orchestrator |
2026-03-05 00:41:56.748693 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-03-05 00:41:56.748704 | orchestrator | Thursday 05 March 2026 00:41:55 +0000 (0:00:00.224) 0:00:44.184 ********
2026-03-05 00:41:56.748714 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-03-05 00:41:56.748725 | orchestrator |
2026-03-05 00:41:56.748736 | orchestrator | PLAY RECAP *********************************************************************
2026-03-05 00:41:56.748747 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-05 00:41:56.748759 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-05 00:41:56.748770 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-05 00:41:56.748781 | orchestrator |
2026-03-05 00:41:56.748792 | orchestrator |
2026-03-05 00:41:56.748803 | orchestrator |
2026-03-05 00:41:56.748814 | orchestrator | TASKS RECAP ********************************************************************
2026-03-05 00:41:56.748824 | orchestrator | Thursday 05 March 2026 00:41:56 +0000 (0:00:01.121) 0:00:45.306 ********
2026-03-05 00:41:56.748846 | orchestrator | ===============================================================================
2026-03-05 00:41:56.748856 | orchestrator | Write configuration file ------------------------------------------------ 4.80s
2026-03-05 00:41:56.748867 | orchestrator | Add known links to the list of available block devices ------------------ 1.38s
2026-03-05 00:41:56.748878 | orchestrator | Add known partitions to the list of available block devices ------------- 1.16s
2026-03-05 00:41:56.748888 | orchestrator | Add known partitions to the list of available block devices ------------- 1.09s
2026-03-05 00:41:56.748899 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.02s
2026-03-05 00:41:56.748909 | orchestrator | Add known links to the list of available block devices ------------------ 0.99s
2026-03-05 00:41:56.748920 | orchestrator | Print configuration data ------------------------------------------------ 0.98s
2026-03-05 00:41:56.748931 | orchestrator | Add known links to the list of available block devices ------------------ 0.91s
2026-03-05 00:41:56.748941 | orchestrator | Add known partitions to the list of available block devices ------------- 0.89s
2026-03-05 00:41:56.748952 | orchestrator | Add known partitions to the list of available block devices ------------- 0.78s
2026-03-05 00:41:56.748963 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.78s
2026-03-05 00:41:56.748981 | orchestrator | Set DB devices config data ---------------------------------------------- 0.77s
2026-03-05 00:41:56.749008 | orchestrator | Add known partitions to the list of available block devices ------------- 0.70s
2026-03-05 00:41:56.749043 | orchestrator | Add known partitions to the list of available block devices ------------- 0.70s
2026-03-05 00:41:57.214975 | orchestrator | Generate shared DB/WAL VG names ----------------------------------------- 0.69s
2026-03-05 00:41:57.215151 | orchestrator | Get initial list of available block devices ----------------------------- 0.68s
2026-03-05 00:41:57.215180 | orchestrator | Add known links to the list of available block devices ------------------ 0.67s
2026-03-05 00:41:57.215197 | orchestrator | Add known partitions to the list of available block devices ------------- 0.66s
2026-03-05 00:41:57.215208 | orchestrator | Print WAL devices ------------------------------------------------------- 0.66s
2026-03-05 00:41:57.215219 | orchestrator | Add known links to the list of available block devices ------------------ 0.65s
2026-03-05 00:42:20.205333 | orchestrator | 2026-03-05 00:42:20 | INFO  | Task 8813f912-8ce7-42b2-bfbd-e7fd660b50c0 (sync inventory) is running in background. Output coming soon.
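The "Print configuration data" output logged for each node follows a fixed naming convention: every device's `osd_lvm_uuid` is expanded into one `lvm_volumes` entry whose logical volume is `osd-block-<uuid>` inside a volume group `ceph-<uuid>`. A minimal sketch of that derivation for the block-only layout, using the UUIDs from the testbed-node-5 output above (the helper name is hypothetical; the playbook builds this structure in Ansible/Jinja, not Python):

```python
def build_lvm_volumes(ceph_osd_devices: dict) -> list:
    """Derive the lvm_volumes list (block-only layout, no separate DB/WAL LVs)
    from per-device OSD UUIDs, mirroring the naming scheme seen in the log."""
    return [
        {
            "data": f"osd-block-{cfg['osd_lvm_uuid']}",
            "data_vg": f"ceph-{cfg['osd_lvm_uuid']}",
        }
        for _device, cfg in sorted(ceph_osd_devices.items())
    ]

# UUIDs taken from the testbed-node-5 "Print configuration data" task above.
devices = {
    "sdb": {"osd_lvm_uuid": "7f4ff93a-c4fd-5f9b-af1c-107d8e49bf15"},
    "sdc": {"osd_lvm_uuid": "56dff28b-2239-50bc-bb4f-66f9aa80ba88"},
}
volumes = build_lvm_volumes(devices)
```

The resulting list matches the `lvm_volumes` structure the play prints and hands to the "Write configuration file" handler.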
2026-03-05 00:42:46.769714 | orchestrator | 2026-03-05 00:42:22 | INFO  | Starting group_vars file reorganization
2026-03-05 00:42:46.769817 | orchestrator | 2026-03-05 00:42:22 | INFO  | Moved 0 file(s) to their respective directories
2026-03-05 00:42:46.769842 | orchestrator | 2026-03-05 00:42:22 | INFO  | Group_vars file reorganization completed
2026-03-05 00:42:46.769862 | orchestrator | 2026-03-05 00:42:24 | INFO  | Starting variable preparation from inventory
2026-03-05 00:42:46.769882 | orchestrator | 2026-03-05 00:42:27 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-03-05 00:42:46.769894 | orchestrator | 2026-03-05 00:42:27 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-03-05 00:42:46.769904 | orchestrator | 2026-03-05 00:42:27 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-03-05 00:42:46.769915 | orchestrator | 2026-03-05 00:42:27 | INFO  | 3 file(s) written, 6 host(s) processed
2026-03-05 00:42:46.769926 | orchestrator | 2026-03-05 00:42:27 | INFO  | Variable preparation completed
2026-03-05 00:42:46.769937 | orchestrator | 2026-03-05 00:42:29 | INFO  | Starting inventory overwrite handling
2026-03-05 00:42:46.769948 | orchestrator | 2026-03-05 00:42:29 | INFO  | Handling group overwrites in 99-overwrite
2026-03-05 00:42:46.769958 | orchestrator | 2026-03-05 00:42:29 | INFO  | Removing group frr:children from 60-generic
2026-03-05 00:42:46.769993 | orchestrator | 2026-03-05 00:42:29 | INFO  | Removing group netbird:children from 50-infrastructure
2026-03-05 00:42:46.770005 | orchestrator | 2026-03-05 00:42:29 | INFO  | Removing group ceph-rgw from 50-ceph
2026-03-05 00:42:46.770067 | orchestrator | 2026-03-05 00:42:29 | INFO  | Removing group ceph-mds from 50-ceph
2026-03-05 00:42:46.770080 | orchestrator | 2026-03-05 00:42:29 | INFO  | Handling group overwrites in 20-roles
2026-03-05 00:42:46.770117 | orchestrator | 2026-03-05 00:42:29 | INFO  | Removing group k3s_node from 50-infrastructure
2026-03-05 00:42:46.770130 | orchestrator | 2026-03-05 00:42:29 | INFO  | Removed 5 group(s) in total
2026-03-05 00:42:46.770140 | orchestrator | 2026-03-05 00:42:29 | INFO  | Inventory overwrite handling completed
2026-03-05 00:42:46.770151 | orchestrator | 2026-03-05 00:42:30 | INFO  | Starting merge of inventory files
2026-03-05 00:42:46.770162 | orchestrator | 2026-03-05 00:42:30 | INFO  | Inventory files merged successfully
2026-03-05 00:42:46.770172 | orchestrator | 2026-03-05 00:42:35 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-03-05 00:42:46.770183 | orchestrator | 2026-03-05 00:42:45 | INFO  | Successfully wrote ClusterShell configuration
2026-03-05 00:42:46.770194 | orchestrator | [master 484152e] 2026-03-05-00-42
2026-03-05 00:42:46.770205 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-)
2026-03-05 00:42:49.080138 | orchestrator | 2026-03-05 00:42:49 | INFO  | Prepare task for execution of ceph-create-lvm-devices.
2026-03-05 00:42:49.151869 | orchestrator | 2026-03-05 00:42:49 | INFO  | Task b31bb776-addc-46a6-a992-000eac468cc5 (ceph-create-lvm-devices) was prepared for execution.
2026-03-05 00:42:49.151969 | orchestrator | 2026-03-05 00:42:49 | INFO  | It takes a moment until task b31bb776-addc-46a6-a992-000eac468cc5 (ceph-create-lvm-devices) has been started and output is visible here.
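The "inventory overwrite handling" messages describe a layered merge: a group that is redefined in an overwrite layer (here 99-overwrite and 20-roles) is removed from the lower-priority inventory files before the merge. A rough sketch of that precedence rule, with layer and group names taken from the log; the helper itself is illustrative, not the actual osism implementation, and which layers override which is assumed to come from configuration:

```python
# Overwrite layers and the groups they redefine, as reported in the log.
OVERWRITES = {
    "99-overwrite": ["frr:children", "netbird:children", "ceph-rgw", "ceph-mds"],
    "20-roles": ["k3s_node"],
}

def remove_overwritten_groups(layers: dict, overwrites: dict) -> int:
    """Remove each overwritten group from every other layer; return the count
    of removals (the log's 'Removed N group(s) in total')."""
    removed = 0
    for overlay, groups in overwrites.items():
        for group in groups:
            for name, layer_groups in layers.items():
                if name != overlay and group in layer_groups:
                    layer_groups.remove(group)
                    removed += 1
    return removed

# Layer contents reconstructed from the removal messages above.
layers = {
    "60-generic": ["frr:children"],
    "50-infrastructure": ["netbird:children", "k3s_node"],
    "50-ceph": ["ceph-rgw", "ceph-mds"],
    "99-overwrite": ["frr:children", "netbird:children", "ceph-rgw", "ceph-mds"],
    "20-roles": ["k3s_node"],
}
removed = remove_overwritten_groups(layers, OVERWRITES)
```

With these inputs the helper removes the same five groups the log reports, leaving only the overwrite layers' definitions to win the subsequent merge.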
2026-03-05 00:43:02.982431 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-05 00:43:02.982577 | orchestrator | 2.16.14
2026-03-05 00:43:02.982589 | orchestrator |
2026-03-05 00:43:02.982597 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-03-05 00:43:02.982606 | orchestrator |
2026-03-05 00:43:02.982613 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-05 00:43:02.982620 | orchestrator | Thursday 05 March 2026 00:42:54 +0000 (0:00:00.343) 0:00:00.343 ********
2026-03-05 00:43:02.982628 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-05 00:43:02.982635 | orchestrator |
2026-03-05 00:43:02.982642 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-05 00:43:02.982649 | orchestrator | Thursday 05 March 2026 00:42:54 +0000 (0:00:00.260) 0:00:00.603 ********
2026-03-05 00:43:02.982656 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:43:02.982663 | orchestrator |
2026-03-05 00:43:02.982669 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:43:02.982675 | orchestrator | Thursday 05 March 2026 00:42:55 +0000 (0:00:00.286) 0:00:00.890 ********
2026-03-05 00:43:02.982682 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-03-05 00:43:02.982688 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-03-05 00:43:02.982694 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-03-05 00:43:02.982701 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-03-05 00:43:02.982707 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-03-05 00:43:02.982713 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-03-05 00:43:02.982719 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-03-05 00:43:02.982744 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-03-05 00:43:02.982750 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-03-05 00:43:02.982756 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-03-05 00:43:02.982763 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-03-05 00:43:02.982769 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-03-05 00:43:02.982787 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-03-05 00:43:02.982794 | orchestrator |
2026-03-05 00:43:02.982800 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:43:02.982806 | orchestrator | Thursday 05 March 2026 00:42:55 +0000 (0:00:00.549) 0:00:01.439 ********
2026-03-05 00:43:02.982813 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:43:02.982819 | orchestrator |
2026-03-05 00:43:02.982825 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:43:02.982832 | orchestrator | Thursday 05 March 2026 00:42:56 +0000 (0:00:00.213) 0:00:01.653 ********
2026-03-05 00:43:02.982838 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:43:02.982844 | orchestrator |
2026-03-05 00:43:02.982851 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:43:02.982857 | orchestrator | Thursday 05 March 2026 00:42:56 +0000 (0:00:00.219) 0:00:01.872 ********
2026-03-05 00:43:02.982863 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:43:02.982869 | orchestrator |
2026-03-05 00:43:02.982876 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:43:02.982882 | orchestrator | Thursday 05 March 2026 00:42:56 +0000 (0:00:00.197) 0:00:02.069 ********
2026-03-05 00:43:02.982888 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:43:02.982894 | orchestrator |
2026-03-05 00:43:02.982901 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:43:02.982907 | orchestrator | Thursday 05 March 2026 00:42:56 +0000 (0:00:00.290) 0:00:02.359 ********
2026-03-05 00:43:02.982914 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:43:02.982924 | orchestrator |
2026-03-05 00:43:02.982934 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:43:02.982950 | orchestrator | Thursday 05 March 2026 00:42:56 +0000 (0:00:00.229) 0:00:02.588 ********
2026-03-05 00:43:02.982962 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:43:02.982972 | orchestrator |
2026-03-05 00:43:02.982983 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:43:02.982993 | orchestrator | Thursday 05 March 2026 00:42:57 +0000 (0:00:00.201) 0:00:02.790 ********
2026-03-05 00:43:02.983004 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:43:02.983014 | orchestrator |
2026-03-05 00:43:02.983025 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:43:02.983037 | orchestrator | Thursday 05 March 2026 00:42:57 +0000 (0:00:00.234) 0:00:03.024 ********
2026-03-05 00:43:02.983047 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:43:02.983058 | orchestrator |
2026-03-05 00:43:02.983069 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:43:02.983078 | orchestrator | Thursday 05 March 2026 00:42:57 +0000 (0:00:00.218) 0:00:03.243 ********
2026-03-05 00:43:02.983114 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_834c07da-6670-4f26-8062-9b7380900cd1)
2026-03-05 00:43:02.983123 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_834c07da-6670-4f26-8062-9b7380900cd1)
2026-03-05 00:43:02.983130 | orchestrator |
2026-03-05 00:43:02.983138 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:43:02.983159 | orchestrator | Thursday 05 March 2026 00:42:58 +0000 (0:00:00.580) 0:00:03.824 ********
2026-03-05 00:43:02.983189 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_46af06ac-e806-45b3-baa6-786374d24d95)
2026-03-05 00:43:02.983197 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_46af06ac-e806-45b3-baa6-786374d24d95)
2026-03-05 00:43:02.983204 | orchestrator |
2026-03-05 00:43:02.983212 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:43:02.983219 | orchestrator | Thursday 05 March 2026 00:42:58 +0000 (0:00:00.680) 0:00:04.505 ********
2026-03-05 00:43:02.983226 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_94265553-26b7-47c9-a922-5463d2be5f34)
2026-03-05 00:43:02.983234 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_94265553-26b7-47c9-a922-5463d2be5f34)
2026-03-05 00:43:02.983241 | orchestrator |
2026-03-05 00:43:02.983248 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:43:02.983256 | orchestrator | Thursday 05 March 2026 00:42:59 +0000 (0:00:00.887) 0:00:05.393 ********
2026-03-05 00:43:02.983263 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_db99048b-c1ef-4f9e-82d3-cd84d3f63e80)
2026-03-05 00:43:02.983271 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_db99048b-c1ef-4f9e-82d3-cd84d3f63e80)
2026-03-05 00:43:02.983278 | orchestrator |
2026-03-05 00:43:02.983286 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:43:02.983293 | orchestrator | Thursday 05 March 2026 00:43:00 +0000 (0:00:00.895) 0:00:06.288 ********
2026-03-05 00:43:02.983300 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-05 00:43:02.983308 | orchestrator |
2026-03-05 00:43:02.983315 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:43:02.983323 | orchestrator | Thursday 05 March 2026 00:43:01 +0000 (0:00:00.349) 0:00:06.638 ********
2026-03-05 00:43:02.983330 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-03-05 00:43:02.983337 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-03-05 00:43:02.983343 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-03-05 00:43:02.983349 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-03-05 00:43:02.983355 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-03-05 00:43:02.983361 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-03-05 00:43:02.983367 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-03-05 00:43:02.983374 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-03-05 00:43:02.983380 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-03-05 00:43:02.983386 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-03-05 00:43:02.983392 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-03-05 00:43:02.983398 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-03-05 00:43:02.983404 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-03-05 00:43:02.983411 | orchestrator |
2026-03-05 00:43:02.983417 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:43:02.983423 | orchestrator | Thursday 05 March 2026 00:43:01 +0000 (0:00:00.452) 0:00:07.091 ********
2026-03-05 00:43:02.983429 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:43:02.983435 | orchestrator |
2026-03-05 00:43:02.983441 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:43:02.983448 | orchestrator | Thursday 05 March 2026 00:43:01 +0000 (0:00:00.248) 0:00:07.340 ********
2026-03-05 00:43:02.983459 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:43:02.983465 | orchestrator |
2026-03-05 00:43:02.983471 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:43:02.983478 | orchestrator | Thursday 05 March 2026 00:43:01 +0000 (0:00:00.243) 0:00:07.583 ********
2026-03-05 00:43:02.983484 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:43:02.983490 | orchestrator |
2026-03-05 00:43:02.983496 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:43:02.983502 | orchestrator | Thursday 05 March 2026 00:43:02 +0000 (0:00:00.204) 0:00:07.788 ********
2026-03-05 00:43:02.983508 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:43:02.983514 | orchestrator |
2026-03-05 00:43:02.983521 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:43:02.983527 | orchestrator | Thursday 05 March 2026 00:43:02 +0000 (0:00:00.197) 0:00:07.986 ********
2026-03-05 00:43:02.983533 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:43:02.983539 | orchestrator |
2026-03-05 00:43:02.983545 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:43:02.983558 | orchestrator | Thursday 05 March 2026 00:43:02 +0000 (0:00:00.213) 0:00:08.199 ********
2026-03-05 00:43:02.983564 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:43:02.983571 | orchestrator |
2026-03-05 00:43:02.983577 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:43:02.983583 | orchestrator | Thursday 05 March 2026 00:43:02 +0000 (0:00:00.187) 0:00:08.386 ********
2026-03-05 00:43:02.983589 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:43:02.983595 | orchestrator |
2026-03-05 00:43:02.983605 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:43:11.158809 | orchestrator | Thursday 05 March 2026 00:43:02 +0000 (0:00:00.223) 0:00:08.609 ********
2026-03-05 00:43:11.158918 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:43:11.158935 | orchestrator |
2026-03-05 00:43:11.158949 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:43:11.158961 | orchestrator | Thursday 05 March 2026 00:43:03 +0000 (0:00:00.212) 0:00:08.821 ********
2026-03-05 00:43:11.158973 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-03-05 00:43:11.158984 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-03-05 00:43:11.158996 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-03-05 00:43:11.159007 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-03-05 00:43:11.159018 | orchestrator |
2026-03-05 00:43:11.159029 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:43:11.159040 | orchestrator | Thursday 05 March 2026 00:43:04 +0000 (0:00:01.088) 0:00:09.910 ********
2026-03-05 00:43:11.159051 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:43:11.159062 | orchestrator |
2026-03-05 00:43:11.159133 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:43:11.159146 | orchestrator | Thursday 05 March 2026 00:43:04 +0000 (0:00:00.233) 0:00:10.143 ********
2026-03-05 00:43:11.159157 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:43:11.159168 | orchestrator |
2026-03-05 00:43:11.159179 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:43:11.159190 | orchestrator | Thursday 05 March 2026 00:43:04 +0000 (0:00:00.232) 0:00:10.375 ********
2026-03-05 00:43:11.159201 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:43:11.159212 | orchestrator |
2026-03-05 00:43:11.159223 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:43:11.159234 | orchestrator | Thursday 05 March 2026 00:43:04 +0000 (0:00:00.196) 0:00:10.572 ********
2026-03-05 00:43:11.159245 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:43:11.159256 | orchestrator |
2026-03-05 00:43:11.159267 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-03-05 00:43:11.159277 | orchestrator | Thursday 05 March 2026 00:43:05 +0000 (0:00:00.197) 0:00:10.769 ********
2026-03-05 00:43:11.159288 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:43:11.159324 | orchestrator |
2026-03-05 00:43:11.159337 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-03-05 00:43:11.159350 | orchestrator | Thursday 05 March 2026 00:43:05 +0000 (0:00:00.133) 0:00:10.902 ********
2026-03-05 00:43:11.159363 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f88409fd-5147-5194-8288-2488b5e44352'}})
2026-03-05 00:43:11.159376 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9d6733ad-9ad8-5bce-b749-e645aedee181'}})
2026-03-05 00:43:11.159389 | orchestrator |
2026-03-05 00:43:11.159417 | orchestrator | TASK [Create block VGs] ********************************************************
2026-03-05 00:43:11.159430 | orchestrator | Thursday 05 March 2026 00:43:05 +0000 (0:00:00.192) 0:00:11.094 ********
2026-03-05 00:43:11.159444 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-f88409fd-5147-5194-8288-2488b5e44352', 'data_vg': 'ceph-f88409fd-5147-5194-8288-2488b5e44352'})
2026-03-05 00:43:11.159459 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-9d6733ad-9ad8-5bce-b749-e645aedee181', 'data_vg': 'ceph-9d6733ad-9ad8-5bce-b749-e645aedee181'})
2026-03-05 00:43:11.159472 | orchestrator |
2026-03-05 00:43:11.159484 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-03-05 00:43:11.159497 | orchestrator | Thursday 05 March 2026 00:43:07 +0000 (0:00:01.989) 0:00:13.084 ********
2026-03-05 00:43:11.159510 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f88409fd-5147-5194-8288-2488b5e44352', 'data_vg': 'ceph-f88409fd-5147-5194-8288-2488b5e44352'})
2026-03-05 00:43:11.159525 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9d6733ad-9ad8-5bce-b749-e645aedee181', 'data_vg': 'ceph-9d6733ad-9ad8-5bce-b749-e645aedee181'})
2026-03-05 00:43:11.159538 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:43:11.159551 | orchestrator |
2026-03-05 00:43:11.159564 | orchestrator | TASK [Create block LVs] ********************************************************
2026-03-05 00:43:11.159577 | orchestrator | Thursday 05 March 2026 00:43:07 +0000 (0:00:00.161) 0:00:13.245 ********
2026-03-05 00:43:11.159588 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-f88409fd-5147-5194-8288-2488b5e44352', 'data_vg': 'ceph-f88409fd-5147-5194-8288-2488b5e44352'})
2026-03-05 00:43:11.159599 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-9d6733ad-9ad8-5bce-b749-e645aedee181', 'data_vg': 'ceph-9d6733ad-9ad8-5bce-b749-e645aedee181'})
2026-03-05 00:43:11.159610 | orchestrator |
2026-03-05 00:43:11.159620 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-03-05 00:43:11.159631 | orchestrator | Thursday 05 March 2026 00:43:09 +0000 (0:00:01.446) 0:00:14.692 ********
2026-03-05 00:43:11.159642 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f88409fd-5147-5194-8288-2488b5e44352', 'data_vg': 'ceph-f88409fd-5147-5194-8288-2488b5e44352'})
2026-03-05 00:43:11.159653 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9d6733ad-9ad8-5bce-b749-e645aedee181', 'data_vg': 'ceph-9d6733ad-9ad8-5bce-b749-e645aedee181'})
2026-03-05 00:43:11.159664 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:43:11.159674 | orchestrator |
2026-03-05 00:43:11.159685 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-03-05 00:43:11.159696 | orchestrator | Thursday 05 March 2026 00:43:09 +0000 (0:00:00.173) 0:00:14.865 ********
2026-03-05 00:43:11.159724 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:43:11.159736 | orchestrator |
2026-03-05 00:43:11.159747 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-03-05 00:43:11.159758 | orchestrator | Thursday 05 March 2026 00:43:09 +0000 (0:00:00.153) 0:00:15.019 ********
2026-03-05 00:43:11.159769 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f88409fd-5147-5194-8288-2488b5e44352', 'data_vg': 'ceph-f88409fd-5147-5194-8288-2488b5e44352'})
2026-03-05 00:43:11.159780 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9d6733ad-9ad8-5bce-b749-e645aedee181', 'data_vg': 'ceph-9d6733ad-9ad8-5bce-b749-e645aedee181'})
2026-03-05 00:43:11.159800 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:43:11.159811 | orchestrator |
2026-03-05 00:43:11.159822 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-03-05 00:43:11.159832 | orchestrator | Thursday 05 March 2026 00:43:09 +0000 (0:00:00.385) 0:00:15.404 ********
2026-03-05 00:43:11.159843 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:43:11.159853 | orchestrator |
2026-03-05 00:43:11.159864 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-03-05 00:43:11.159875 | orchestrator | Thursday 05 March 2026 00:43:09 +0000 (0:00:00.140) 0:00:15.545 ********
2026-03-05 00:43:11.159886 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f88409fd-5147-5194-8288-2488b5e44352', 'data_vg': 'ceph-f88409fd-5147-5194-8288-2488b5e44352'})
2026-03-05 00:43:11.159897 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9d6733ad-9ad8-5bce-b749-e645aedee181', 'data_vg': 'ceph-9d6733ad-9ad8-5bce-b749-e645aedee181'})
2026-03-05 00:43:11.159908 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:43:11.159918 | orchestrator |
2026-03-05 00:43:11.159929 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-03-05 00:43:11.159940 | orchestrator | Thursday 05 March 2026 00:43:10 +0000 (0:00:00.169) 0:00:15.714 ********
2026-03-05 00:43:11.159950 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:43:11.159961 | orchestrator |
2026-03-05 00:43:11.159972 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-03-05 00:43:11.159983 | orchestrator | Thursday 05 March 2026 00:43:10 +0000 (0:00:00.138) 0:00:15.852 ********
2026-03-05 00:43:11.159993 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f88409fd-5147-5194-8288-2488b5e44352', 'data_vg': 'ceph-f88409fd-5147-5194-8288-2488b5e44352'})
2026-03-05 00:43:11.160004 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9d6733ad-9ad8-5bce-b749-e645aedee181', 'data_vg': 'ceph-9d6733ad-9ad8-5bce-b749-e645aedee181'})
2026-03-05 00:43:11.160015 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:43:11.160026 | orchestrator |
2026-03-05 00:43:11.160037 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-03-05 00:43:11.160047 | orchestrator | Thursday 05 March 2026 00:43:10 +0000 (0:00:00.149) 0:00:16.012 ********
2026-03-05 00:43:11.160058 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:43:11.160069 | orchestrator |
2026-03-05 00:43:11.160101 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-03-05 00:43:11.160112 | orchestrator | Thursday 05 March 2026 00:43:10 +0000 (0:00:00.149) 0:00:16.162 ********
2026-03-05 00:43:11.160123 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f88409fd-5147-5194-8288-2488b5e44352', 'data_vg': 'ceph-f88409fd-5147-5194-8288-2488b5e44352'})
2026-03-05 00:43:11.160134 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9d6733ad-9ad8-5bce-b749-e645aedee181', 'data_vg': 'ceph-9d6733ad-9ad8-5bce-b749-e645aedee181'})
2026-03-05 00:43:11.160144 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:43:11.160155 | orchestrator |
2026-03-05 00:43:11.160166 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-03-05 00:43:11.160177 | orchestrator | Thursday 05 March 2026 00:43:10 +0000 (0:00:00.162) 0:00:16.324 ********
2026-03-05 00:43:11.160188 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f88409fd-5147-5194-8288-2488b5e44352', 'data_vg': 'ceph-f88409fd-5147-5194-8288-2488b5e44352'})
2026-03-05 00:43:11.160198 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9d6733ad-9ad8-5bce-b749-e645aedee181', 'data_vg': 'ceph-9d6733ad-9ad8-5bce-b749-e645aedee181'})
2026-03-05 00:43:11.160209 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:43:11.160220 | orchestrator |
2026-03-05 00:43:11.160232 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-03-05 00:43:11.160260 | orchestrator | Thursday 05 March 2026 00:43:10 +0000 (0:00:00.173) 0:00:16.498 ********
2026-03-05 00:43:11.160278 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f88409fd-5147-5194-8288-2488b5e44352', 'data_vg': 'ceph-f88409fd-5147-5194-8288-2488b5e44352'})
2026-03-05 00:43:11.160293 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9d6733ad-9ad8-5bce-b749-e645aedee181', 'data_vg': 'ceph-9d6733ad-9ad8-5bce-b749-e645aedee181'})
2026-03-05 00:43:11.160309 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:43:11.160325 | orchestrator |
2026-03-05 00:43:11.160342 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-03-05 00:43:11.160358 | orchestrator | Thursday 05 March 2026 00:43:11 +0000 (0:00:00.163) 0:00:16.661 ********
2026-03-05 00:43:11.160373 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:43:11.160391 | orchestrator |
2026-03-05 00:43:11.160408 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-03-05 00:43:11.160439 | orchestrator | Thursday 05 March 2026 00:43:11 +0000 (0:00:00.122) 0:00:16.783 ********
2026-03-05 00:43:17.626692 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:43:17.626779 | orchestrator |
2026-03-05 00:43:17.626791 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-03-05 00:43:17.626800 | orchestrator | Thursday 05 March 2026 00:43:11 +0000 (0:00:00.142) 0:00:16.926 ********
2026-03-05 00:43:17.626808 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:43:17.626815 | orchestrator |
2026-03-05 00:43:17.626823 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-03-05 00:43:17.626830 | orchestrator | Thursday 05 March 2026 00:43:11 +0000 (0:00:00.137) 0:00:17.064 ********
2026-03-05 00:43:17.626837 | orchestrator | ok: [testbed-node-3] => {
2026-03-05 00:43:17.626845 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-03-05 00:43:17.626852 | orchestrator | }
2026-03-05 00:43:17.626860 | orchestrator |
2026-03-05 00:43:17.626882 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-03-05 00:43:17.626893 | orchestrator | Thursday 05 March 2026 00:43:11 +0000 (0:00:00.350) 0:00:17.414 ********
2026-03-05 00:43:17.626904 | orchestrator | ok: [testbed-node-3] => {
2026-03-05 00:43:17.626916 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-03-05 00:43:17.626927 | orchestrator | }
2026-03-05 00:43:17.626939 | orchestrator |
2026-03-05 00:43:17.626953 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-03-05 00:43:17.626966 | orchestrator | Thursday 05 March 2026 00:43:11 +0000 (0:00:00.143) 0:00:17.558 ********
2026-03-05 00:43:17.626976 | orchestrator | ok: [testbed-node-3] => {
2026-03-05 00:43:17.626983 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-03-05 00:43:17.626990 | orchestrator | }
2026-03-05 00:43:17.626996 | orchestrator |
2026-03-05 00:43:17.627003 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-03-05 00:43:17.627010 | orchestrator | Thursday 05 March 2026 00:43:12 +0000 (0:00:00.150) 0:00:17.709 ********
2026-03-05 00:43:17.627017 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:43:17.627024 | orchestrator |
2026-03-05 00:43:17.627030 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-03-05 00:43:17.627037 | orchestrator | Thursday 05 March 2026 00:43:12 +0000 (0:00:00.689) 0:00:18.399 ********
2026-03-05 00:43:17.627044 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:43:17.627050 | orchestrator |
2026-03-05 00:43:17.627057 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-03-05 00:43:17.627063 | orchestrator | Thursday 05 March 2026 00:43:13 +0000 (0:00:00.495) 0:00:18.894 ********
2026-03-05 00:43:17.627070 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:43:17.627104 | orchestrator |
2026-03-05 00:43:17.627112 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-03-05 00:43:17.627118 | orchestrator | Thursday 05 March 2026 00:43:13 +0000 (0:00:00.515) 0:00:19.409 ********
2026-03-05 00:43:17.627125 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:43:17.627132 | orchestrator |
2026-03-05 00:43:17.627163 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-03-05 00:43:17.627170 | orchestrator | Thursday 05 March 2026 00:43:13 +0000 (0:00:00.152) 0:00:19.562 ********
2026-03-05 00:43:17.627177 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:43:17.627183 | orchestrator |
2026-03-05 00:43:17.627190 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-03-05 00:43:17.627196 | orchestrator | Thursday 05 March 2026 00:43:14 +0000 (0:00:00.109) 0:00:19.671 ********
2026-03-05 00:43:17.627203 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:43:17.627210 | orchestrator |
2026-03-05 00:43:17.627216 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-03-05 00:43:17.627223 | orchestrator | Thursday 05 March 2026 00:43:14 +0000 (0:00:00.126) 0:00:19.797 ********
2026-03-05 00:43:17.627231 | orchestrator | ok: [testbed-node-3] => {
2026-03-05 00:43:17.627239 | orchestrator |     "vgs_report": {
2026-03-05 00:43:17.627247 | orchestrator |         "vg": []
2026-03-05 00:43:17.627254 | orchestrator |     }
2026-03-05 00:43:17.627262 | orchestrator | }
2026-03-05 00:43:17.627270 | orchestrator |
2026-03-05 00:43:17.627278 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-03-05 00:43:17.627286 | orchestrator | Thursday 05 March 2026 00:43:14 +0000 (0:00:00.155) 0:00:19.952 ********
2026-03-05 00:43:17.627294 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:43:17.627302 | orchestrator |
2026-03-05 00:43:17.627309 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-03-05 00:43:17.627316 | orchestrator | Thursday 05 March 2026 00:43:14 +0000 (0:00:00.134) 0:00:20.086 ********
2026-03-05 00:43:17.627324 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:43:17.627331 | orchestrator |
2026-03-05 00:43:17.627339 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-03-05 00:43:17.627346 | orchestrator | Thursday 05 March 2026 00:43:14 +0000 (0:00:00.136) 0:00:20.223 ********
2026-03-05 00:43:17.627354 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:43:17.627362 | orchestrator |
2026-03-05 00:43:17.627369 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-03-05 00:43:17.627377 | orchestrator | Thursday 05 March 2026 00:43:14 +0000 (0:00:00.366) 0:00:20.590 ********
2026-03-05 00:43:17.627384 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:43:17.627392 | orchestrator |
2026-03-05 00:43:17.627400 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-03-05 00:43:17.627407 | orchestrator | Thursday 05 March 2026 00:43:15 +0000 (0:00:00.160) 0:00:20.750 ********
2026-03-05 00:43:17.627415 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:43:17.627422 | orchestrator |
2026-03-05 00:43:17.627430 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-03-05 00:43:17.627437 | orchestrator | Thursday 05 March 2026 00:43:15 +0000 (0:00:00.156) 0:00:20.907 ********
2026-03-05 00:43:17.627445 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:43:17.627453 | orchestrator |
2026-03-05 00:43:17.627460 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-03-05 00:43:17.627468 | orchestrator | Thursday 05 March 2026 00:43:15 +0000 (0:00:00.135) 0:00:21.043 ********
2026-03-05 00:43:17.627476 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:43:17.627484 | orchestrator |
2026-03-05 00:43:17.627492 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-03-05 00:43:17.627500 | orchestrator | Thursday 05 March 2026 00:43:15 +0000 (0:00:00.124) 0:00:21.167 ********
2026-03-05 00:43:17.627520 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:43:17.627528 | orchestrator |
2026-03-05 00:43:17.627536 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-03-05 00:43:17.627544 | orchestrator | Thursday 05 March 2026 00:43:15 +0000 (0:00:00.130) 0:00:21.298 ********
2026-03-05 00:43:17.627552 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:43:17.627559 | orchestrator |
2026-03-05 00:43:17.627565 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-03-05 00:43:17.627578 | orchestrator | Thursday 05 March 2026 00:43:15 +0000 (0:00:00.138) 0:00:21.437 ********
2026-03-05 00:43:17.627585 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:43:17.627592 | orchestrator |
2026-03-05 00:43:17.627598 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-03-05 00:43:17.627605 | orchestrator | Thursday 05 March 2026 00:43:15 +0000 (0:00:00.146) 0:00:21.583 ********
2026-03-05 00:43:17.627611 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:43:17.627618 | orchestrator |
2026-03-05 00:43:17.627624 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-03-05 00:43:17.627631 | orchestrator | Thursday 05 March 2026 00:43:16 +0000 (0:00:00.134) 0:00:21.717 ********
2026-03-05 00:43:17.627637 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:43:17.627644 | orchestrator |
2026-03-05 00:43:17.627650 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-03-05 00:43:17.627657 | orchestrator | Thursday 05 March 2026 00:43:16 +0000 (0:00:00.162) 0:00:21.880 ********
2026-03-05 00:43:17.627663 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:43:17.627670 | orchestrator |
2026-03-05 00:43:17.627676 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-03-05 00:43:17.627683 | orchestrator | Thursday 05 March 2026 00:43:16 +0000 (0:00:00.131) 0:00:22.011 ********
2026-03-05 00:43:17.627689 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:43:17.627696 | orchestrator |
2026-03-05 00:43:17.627702 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-03-05 00:43:17.627709 | orchestrator | Thursday 05 March 2026 00:43:16 +0000 (0:00:00.135) 0:00:22.147 ********
2026-03-05 00:43:17.627717 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f88409fd-5147-5194-8288-2488b5e44352', 'data_vg': 'ceph-f88409fd-5147-5194-8288-2488b5e44352'})
2026-03-05 00:43:17.627725 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9d6733ad-9ad8-5bce-b749-e645aedee181', 'data_vg': 'ceph-9d6733ad-9ad8-5bce-b749-e645aedee181'})
2026-03-05 00:43:17.627731 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:43:17.627738 | orchestrator |
2026-03-05 00:43:17.627745 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-03-05 00:43:17.627754 | orchestrator | Thursday 05 March 2026 00:43:16 +0000 (0:00:00.376) 0:00:22.523 ********
2026-03-05 00:43:17.627761 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f88409fd-5147-5194-8288-2488b5e44352', 'data_vg': 'ceph-f88409fd-5147-5194-8288-2488b5e44352'})
2026-03-05 00:43:17.627768 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9d6733ad-9ad8-5bce-b749-e645aedee181', 'data_vg': 'ceph-9d6733ad-9ad8-5bce-b749-e645aedee181'})
2026-03-05 00:43:17.627774 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:43:17.627781 | orchestrator |
2026-03-05 00:43:17.627788 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-03-05 00:43:17.627794 | orchestrator | Thursday 05 March 2026 00:43:17 +0000 (0:00:00.155) 0:00:22.678 ********
2026-03-05 00:43:17.627801 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f88409fd-5147-5194-8288-2488b5e44352', 'data_vg': 'ceph-f88409fd-5147-5194-8288-2488b5e44352'})
2026-03-05 00:43:17.627807 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9d6733ad-9ad8-5bce-b749-e645aedee181', 'data_vg': 'ceph-9d6733ad-9ad8-5bce-b749-e645aedee181'})
2026-03-05 00:43:17.627814 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:43:17.627820 | orchestrator |
2026-03-05 00:43:17.627827 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-03-05 00:43:17.627833 | orchestrator | Thursday 05 March 2026 00:43:17 +0000 (0:00:00.162) 0:00:22.841 ********
2026-03-05 00:43:17.627840 | orchestrator | skipping: [testbed-node-3] => (item={'data':
'osd-block-f88409fd-5147-5194-8288-2488b5e44352', 'data_vg': 'ceph-f88409fd-5147-5194-8288-2488b5e44352'})  2026-03-05 00:43:17.627847 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9d6733ad-9ad8-5bce-b749-e645aedee181', 'data_vg': 'ceph-9d6733ad-9ad8-5bce-b749-e645aedee181'})  2026-03-05 00:43:17.627859 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:43:17.627866 | orchestrator | 2026-03-05 00:43:17.627872 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-03-05 00:43:17.627879 | orchestrator | Thursday 05 March 2026 00:43:17 +0000 (0:00:00.181) 0:00:23.022 ******** 2026-03-05 00:43:17.627887 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f88409fd-5147-5194-8288-2488b5e44352', 'data_vg': 'ceph-f88409fd-5147-5194-8288-2488b5e44352'})  2026-03-05 00:43:17.627898 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9d6733ad-9ad8-5bce-b749-e645aedee181', 'data_vg': 'ceph-9d6733ad-9ad8-5bce-b749-e645aedee181'})  2026-03-05 00:43:17.627909 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:43:17.627920 | orchestrator | 2026-03-05 00:43:17.627932 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-03-05 00:43:17.627944 | orchestrator | Thursday 05 March 2026 00:43:17 +0000 (0:00:00.174) 0:00:23.196 ******** 2026-03-05 00:43:17.627961 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f88409fd-5147-5194-8288-2488b5e44352', 'data_vg': 'ceph-f88409fd-5147-5194-8288-2488b5e44352'})  2026-03-05 00:43:23.448053 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9d6733ad-9ad8-5bce-b749-e645aedee181', 'data_vg': 'ceph-9d6733ad-9ad8-5bce-b749-e645aedee181'})  2026-03-05 00:43:23.448165 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:43:23.448173 | orchestrator | 2026-03-05 00:43:23.448179 | orchestrator | TASK [Create DB LVs for 
ceph_db_wal_devices] *********************************** 2026-03-05 00:43:23.448187 | orchestrator | Thursday 05 March 2026 00:43:17 +0000 (0:00:00.174) 0:00:23.371 ******** 2026-03-05 00:43:23.448194 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f88409fd-5147-5194-8288-2488b5e44352', 'data_vg': 'ceph-f88409fd-5147-5194-8288-2488b5e44352'})  2026-03-05 00:43:23.448201 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9d6733ad-9ad8-5bce-b749-e645aedee181', 'data_vg': 'ceph-9d6733ad-9ad8-5bce-b749-e645aedee181'})  2026-03-05 00:43:23.448207 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:43:23.448214 | orchestrator | 2026-03-05 00:43:23.448221 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-03-05 00:43:23.448228 | orchestrator | Thursday 05 March 2026 00:43:17 +0000 (0:00:00.157) 0:00:23.528 ******** 2026-03-05 00:43:23.448234 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f88409fd-5147-5194-8288-2488b5e44352', 'data_vg': 'ceph-f88409fd-5147-5194-8288-2488b5e44352'})  2026-03-05 00:43:23.448239 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9d6733ad-9ad8-5bce-b749-e645aedee181', 'data_vg': 'ceph-9d6733ad-9ad8-5bce-b749-e645aedee181'})  2026-03-05 00:43:23.448243 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:43:23.448247 | orchestrator | 2026-03-05 00:43:23.448250 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-03-05 00:43:23.448254 | orchestrator | Thursday 05 March 2026 00:43:18 +0000 (0:00:00.170) 0:00:23.699 ******** 2026-03-05 00:43:23.448258 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:43:23.448262 | orchestrator | 2026-03-05 00:43:23.448266 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-03-05 00:43:23.448270 | orchestrator | Thursday 05 March 2026 00:43:18 +0000 
(0:00:00.526) 0:00:24.226 ******** 2026-03-05 00:43:23.448273 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:43:23.448277 | orchestrator | 2026-03-05 00:43:23.448281 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-03-05 00:43:23.448285 | orchestrator | Thursday 05 March 2026 00:43:19 +0000 (0:00:00.532) 0:00:24.758 ******** 2026-03-05 00:43:23.448288 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:43:23.448292 | orchestrator | 2026-03-05 00:43:23.448296 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-03-05 00:43:23.448300 | orchestrator | Thursday 05 March 2026 00:43:19 +0000 (0:00:00.158) 0:00:24.917 ******** 2026-03-05 00:43:23.448319 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-9d6733ad-9ad8-5bce-b749-e645aedee181', 'vg_name': 'ceph-9d6733ad-9ad8-5bce-b749-e645aedee181'}) 2026-03-05 00:43:23.448325 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-f88409fd-5147-5194-8288-2488b5e44352', 'vg_name': 'ceph-f88409fd-5147-5194-8288-2488b5e44352'}) 2026-03-05 00:43:23.448329 | orchestrator | 2026-03-05 00:43:23.448333 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-03-05 00:43:23.448336 | orchestrator | Thursday 05 March 2026 00:43:19 +0000 (0:00:00.171) 0:00:25.089 ******** 2026-03-05 00:43:23.448351 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f88409fd-5147-5194-8288-2488b5e44352', 'data_vg': 'ceph-f88409fd-5147-5194-8288-2488b5e44352'})  2026-03-05 00:43:23.448355 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9d6733ad-9ad8-5bce-b749-e645aedee181', 'data_vg': 'ceph-9d6733ad-9ad8-5bce-b749-e645aedee181'})  2026-03-05 00:43:23.448359 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:43:23.448363 | orchestrator | 2026-03-05 00:43:23.448367 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] ************************* 2026-03-05 00:43:23.448371 | orchestrator | Thursday 05 March 2026 00:43:19 +0000 (0:00:00.388) 0:00:25.477 ******** 2026-03-05 00:43:23.448374 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f88409fd-5147-5194-8288-2488b5e44352', 'data_vg': 'ceph-f88409fd-5147-5194-8288-2488b5e44352'})  2026-03-05 00:43:23.448378 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9d6733ad-9ad8-5bce-b749-e645aedee181', 'data_vg': 'ceph-9d6733ad-9ad8-5bce-b749-e645aedee181'})  2026-03-05 00:43:23.448382 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:43:23.448386 | orchestrator | 2026-03-05 00:43:23.448389 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-05 00:43:23.448393 | orchestrator | Thursday 05 March 2026 00:43:20 +0000 (0:00:00.212) 0:00:25.690 ******** 2026-03-05 00:43:23.448397 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f88409fd-5147-5194-8288-2488b5e44352', 'data_vg': 'ceph-f88409fd-5147-5194-8288-2488b5e44352'})  2026-03-05 00:43:23.448401 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9d6733ad-9ad8-5bce-b749-e645aedee181', 'data_vg': 'ceph-9d6733ad-9ad8-5bce-b749-e645aedee181'})  2026-03-05 00:43:23.448404 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:43:23.448408 | orchestrator | 2026-03-05 00:43:23.448412 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-05 00:43:23.448415 | orchestrator | Thursday 05 March 2026 00:43:20 +0000 (0:00:00.182) 0:00:25.873 ******** 2026-03-05 00:43:23.448430 | orchestrator | ok: [testbed-node-3] => { 2026-03-05 00:43:23.448434 | orchestrator |  "lvm_report": { 2026-03-05 00:43:23.448438 | orchestrator |  "lv": [ 2026-03-05 00:43:23.448442 | orchestrator |  { 2026-03-05 00:43:23.448446 | orchestrator |  "lv_name": 
"osd-block-9d6733ad-9ad8-5bce-b749-e645aedee181", 2026-03-05 00:43:23.448451 | orchestrator |  "vg_name": "ceph-9d6733ad-9ad8-5bce-b749-e645aedee181" 2026-03-05 00:43:23.448455 | orchestrator |  }, 2026-03-05 00:43:23.448458 | orchestrator |  { 2026-03-05 00:43:23.448462 | orchestrator |  "lv_name": "osd-block-f88409fd-5147-5194-8288-2488b5e44352", 2026-03-05 00:43:23.448466 | orchestrator |  "vg_name": "ceph-f88409fd-5147-5194-8288-2488b5e44352" 2026-03-05 00:43:23.448470 | orchestrator |  } 2026-03-05 00:43:23.448473 | orchestrator |  ], 2026-03-05 00:43:23.448477 | orchestrator |  "pv": [ 2026-03-05 00:43:23.448481 | orchestrator |  { 2026-03-05 00:43:23.448484 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-05 00:43:23.448488 | orchestrator |  "vg_name": "ceph-f88409fd-5147-5194-8288-2488b5e44352" 2026-03-05 00:43:23.448492 | orchestrator |  }, 2026-03-05 00:43:23.448496 | orchestrator |  { 2026-03-05 00:43:23.448503 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-05 00:43:23.448507 | orchestrator |  "vg_name": "ceph-9d6733ad-9ad8-5bce-b749-e645aedee181" 2026-03-05 00:43:23.448511 | orchestrator |  } 2026-03-05 00:43:23.448514 | orchestrator |  ] 2026-03-05 00:43:23.448518 | orchestrator |  } 2026-03-05 00:43:23.448522 | orchestrator | } 2026-03-05 00:43:23.448526 | orchestrator | 2026-03-05 00:43:23.448530 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-03-05 00:43:23.448534 | orchestrator | 2026-03-05 00:43:23.448537 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-05 00:43:23.448541 | orchestrator | Thursday 05 March 2026 00:43:20 +0000 (0:00:00.307) 0:00:26.181 ******** 2026-03-05 00:43:23.448545 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-03-05 00:43:23.448549 | orchestrator | 2026-03-05 00:43:23.448552 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-05 
00:43:23.448556 | orchestrator | Thursday 05 March 2026 00:43:20 +0000 (0:00:00.288) 0:00:26.469 ******** 2026-03-05 00:43:23.448560 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:43:23.448563 | orchestrator | 2026-03-05 00:43:23.448567 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:43:23.448571 | orchestrator | Thursday 05 March 2026 00:43:21 +0000 (0:00:00.246) 0:00:26.716 ******** 2026-03-05 00:43:23.448577 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-03-05 00:43:23.448581 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-03-05 00:43:23.448585 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-03-05 00:43:23.448588 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-03-05 00:43:23.448592 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-03-05 00:43:23.448596 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-03-05 00:43:23.448600 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-03-05 00:43:23.448604 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-03-05 00:43:23.448608 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-03-05 00:43:23.448612 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-03-05 00:43:23.448616 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-03-05 00:43:23.448621 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-03-05 00:43:23.448625 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-03-05 00:43:23.448629 | orchestrator | 2026-03-05 00:43:23.448633 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:43:23.448637 | orchestrator | Thursday 05 March 2026 00:43:21 +0000 (0:00:00.473) 0:00:27.189 ******** 2026-03-05 00:43:23.448642 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:43:23.448646 | orchestrator | 2026-03-05 00:43:23.448650 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:43:23.448654 | orchestrator | Thursday 05 March 2026 00:43:21 +0000 (0:00:00.252) 0:00:27.441 ******** 2026-03-05 00:43:23.448658 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:43:23.448662 | orchestrator | 2026-03-05 00:43:23.448667 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:43:23.448671 | orchestrator | Thursday 05 March 2026 00:43:22 +0000 (0:00:00.224) 0:00:27.666 ******** 2026-03-05 00:43:23.448675 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:43:23.448679 | orchestrator | 2026-03-05 00:43:23.448684 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:43:23.448691 | orchestrator | Thursday 05 March 2026 00:43:22 +0000 (0:00:00.683) 0:00:28.350 ******** 2026-03-05 00:43:23.448696 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:43:23.448700 | orchestrator | 2026-03-05 00:43:23.448704 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:43:23.448708 | orchestrator | Thursday 05 March 2026 00:43:22 +0000 (0:00:00.253) 0:00:28.603 ******** 2026-03-05 00:43:23.448712 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:43:23.448716 | orchestrator | 2026-03-05 00:43:23.448721 | orchestrator | TASK [Add known links to the 
list of available block devices] ****************** 2026-03-05 00:43:23.448726 | orchestrator | Thursday 05 March 2026 00:43:23 +0000 (0:00:00.240) 0:00:28.843 ******** 2026-03-05 00:43:23.448730 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:43:23.448734 | orchestrator | 2026-03-05 00:43:23.448742 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:43:35.252184 | orchestrator | Thursday 05 March 2026 00:43:23 +0000 (0:00:00.235) 0:00:29.078 ******** 2026-03-05 00:43:35.252305 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:43:35.252317 | orchestrator | 2026-03-05 00:43:35.252325 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:43:35.252333 | orchestrator | Thursday 05 March 2026 00:43:23 +0000 (0:00:00.222) 0:00:29.301 ******** 2026-03-05 00:43:35.252339 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:43:35.252346 | orchestrator | 2026-03-05 00:43:35.252353 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:43:35.252360 | orchestrator | Thursday 05 March 2026 00:43:23 +0000 (0:00:00.203) 0:00:29.504 ******** 2026-03-05 00:43:35.252367 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_26e3da3f-ebef-4f2e-987f-6c33458d570f) 2026-03-05 00:43:35.252376 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_26e3da3f-ebef-4f2e-987f-6c33458d570f) 2026-03-05 00:43:35.252382 | orchestrator | 2026-03-05 00:43:35.252389 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:43:35.252396 | orchestrator | Thursday 05 March 2026 00:43:24 +0000 (0:00:00.428) 0:00:29.933 ******** 2026-03-05 00:43:35.252402 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_fde3cda2-3067-4d86-95c6-d39f62804520) 2026-03-05 00:43:35.252409 | orchestrator | ok: 
[testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_fde3cda2-3067-4d86-95c6-d39f62804520) 2026-03-05 00:43:35.252416 | orchestrator | 2026-03-05 00:43:35.252422 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:43:35.252429 | orchestrator | Thursday 05 March 2026 00:43:24 +0000 (0:00:00.440) 0:00:30.373 ******** 2026-03-05 00:43:35.252436 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_5fc6e5d1-feaa-44be-badf-9551630a8ded) 2026-03-05 00:43:35.252442 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_5fc6e5d1-feaa-44be-badf-9551630a8ded) 2026-03-05 00:43:35.252449 | orchestrator | 2026-03-05 00:43:35.252456 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:43:35.252462 | orchestrator | Thursday 05 March 2026 00:43:25 +0000 (0:00:00.468) 0:00:30.841 ******** 2026-03-05 00:43:35.252486 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_4cc63ee5-51dc-4d14-b9fb-faf031b30aaa) 2026-03-05 00:43:35.252493 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_4cc63ee5-51dc-4d14-b9fb-faf031b30aaa) 2026-03-05 00:43:35.252500 | orchestrator | 2026-03-05 00:43:35.252506 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:43:35.252513 | orchestrator | Thursday 05 March 2026 00:43:25 +0000 (0:00:00.720) 0:00:31.561 ******** 2026-03-05 00:43:35.252520 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-05 00:43:35.252527 | orchestrator | 2026-03-05 00:43:35.252533 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:43:35.252540 | orchestrator | Thursday 05 March 2026 00:43:26 +0000 (0:00:00.605) 0:00:32.167 ******** 2026-03-05 00:43:35.252564 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => 
(item=loop0) 2026-03-05 00:43:35.252572 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-03-05 00:43:35.252579 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-03-05 00:43:35.252585 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-03-05 00:43:35.252592 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-03-05 00:43:35.252598 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-03-05 00:43:35.252605 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-03-05 00:43:35.252611 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-03-05 00:43:35.252618 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-03-05 00:43:35.252624 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-03-05 00:43:35.252631 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-03-05 00:43:35.252637 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-03-05 00:43:35.252644 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-03-05 00:43:35.252651 | orchestrator | 2026-03-05 00:43:35.252658 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:43:35.252664 | orchestrator | Thursday 05 March 2026 00:43:27 +0000 (0:00:00.910) 0:00:33.078 ******** 2026-03-05 00:43:35.252671 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:43:35.252677 | orchestrator | 2026-03-05 
00:43:35.252684 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:43:35.252691 | orchestrator | Thursday 05 March 2026 00:43:27 +0000 (0:00:00.227) 0:00:33.306 ******** 2026-03-05 00:43:35.252697 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:43:35.252704 | orchestrator | 2026-03-05 00:43:35.252710 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:43:35.252717 | orchestrator | Thursday 05 March 2026 00:43:27 +0000 (0:00:00.205) 0:00:33.512 ******** 2026-03-05 00:43:35.252724 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:43:35.252730 | orchestrator | 2026-03-05 00:43:35.252751 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:43:35.252758 | orchestrator | Thursday 05 March 2026 00:43:28 +0000 (0:00:00.211) 0:00:33.723 ******** 2026-03-05 00:43:35.252765 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:43:35.252772 | orchestrator | 2026-03-05 00:43:35.252778 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:43:35.252785 | orchestrator | Thursday 05 March 2026 00:43:28 +0000 (0:00:00.212) 0:00:33.936 ******** 2026-03-05 00:43:35.252791 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:43:35.252798 | orchestrator | 2026-03-05 00:43:35.252805 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:43:35.252811 | orchestrator | Thursday 05 March 2026 00:43:28 +0000 (0:00:00.234) 0:00:34.170 ******** 2026-03-05 00:43:35.252818 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:43:35.252824 | orchestrator | 2026-03-05 00:43:35.252831 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:43:35.252838 | orchestrator | Thursday 05 March 2026 00:43:28 +0000 (0:00:00.248) 
0:00:34.418 ******** 2026-03-05 00:43:35.252844 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:43:35.252851 | orchestrator | 2026-03-05 00:43:35.252857 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:43:35.252864 | orchestrator | Thursday 05 March 2026 00:43:29 +0000 (0:00:00.216) 0:00:34.635 ******** 2026-03-05 00:43:35.252889 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:43:35.252897 | orchestrator | 2026-03-05 00:43:35.252904 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:43:35.252911 | orchestrator | Thursday 05 March 2026 00:43:29 +0000 (0:00:00.201) 0:00:34.836 ******** 2026-03-05 00:43:35.252918 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-03-05 00:43:35.252925 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-03-05 00:43:35.252932 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-03-05 00:43:35.252938 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-03-05 00:43:35.252945 | orchestrator | 2026-03-05 00:43:35.252952 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:43:35.252958 | orchestrator | Thursday 05 March 2026 00:43:30 +0000 (0:00:01.003) 0:00:35.840 ******** 2026-03-05 00:43:35.252965 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:43:35.252972 | orchestrator | 2026-03-05 00:43:35.252978 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:43:35.252985 | orchestrator | Thursday 05 March 2026 00:43:30 +0000 (0:00:00.196) 0:00:36.036 ******** 2026-03-05 00:43:35.252992 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:43:35.252999 | orchestrator | 2026-03-05 00:43:35.253005 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:43:35.253018 | orchestrator | Thursday 05 
March 2026 00:43:31 +0000 (0:00:00.716) 0:00:36.753 ******** 2026-03-05 00:43:35.253025 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:43:35.253032 | orchestrator | 2026-03-05 00:43:35.253039 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:43:35.253046 | orchestrator | Thursday 05 March 2026 00:43:31 +0000 (0:00:00.211) 0:00:36.965 ******** 2026-03-05 00:43:35.253052 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:43:35.253059 | orchestrator | 2026-03-05 00:43:35.253085 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-03-05 00:43:35.253095 | orchestrator | Thursday 05 March 2026 00:43:31 +0000 (0:00:00.215) 0:00:37.180 ******** 2026-03-05 00:43:35.253102 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:43:35.253109 | orchestrator | 2026-03-05 00:43:35.253115 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-03-05 00:43:35.253122 | orchestrator | Thursday 05 March 2026 00:43:31 +0000 (0:00:00.146) 0:00:37.327 ******** 2026-03-05 00:43:35.253129 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '130794de-baff-5f0b-9c30-9a8206b73831'}}) 2026-03-05 00:43:35.253136 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '54671a7c-dad9-563e-9508-4448c9acfc6a'}}) 2026-03-05 00:43:35.253143 | orchestrator | 2026-03-05 00:43:35.253149 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-03-05 00:43:35.253156 | orchestrator | Thursday 05 March 2026 00:43:31 +0000 (0:00:00.259) 0:00:37.587 ******** 2026-03-05 00:43:35.253164 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-130794de-baff-5f0b-9c30-9a8206b73831', 'data_vg': 'ceph-130794de-baff-5f0b-9c30-9a8206b73831'}) 2026-03-05 00:43:35.253172 | orchestrator | changed: [testbed-node-4] => 
(item={'data': 'osd-block-54671a7c-dad9-563e-9508-4448c9acfc6a', 'data_vg': 'ceph-54671a7c-dad9-563e-9508-4448c9acfc6a'}) 2026-03-05 00:43:35.253179 | orchestrator | 2026-03-05 00:43:35.253186 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-03-05 00:43:35.253192 | orchestrator | Thursday 05 March 2026 00:43:33 +0000 (0:00:01.823) 0:00:39.411 ******** 2026-03-05 00:43:35.253199 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-130794de-baff-5f0b-9c30-9a8206b73831', 'data_vg': 'ceph-130794de-baff-5f0b-9c30-9a8206b73831'})  2026-03-05 00:43:35.253206 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-54671a7c-dad9-563e-9508-4448c9acfc6a', 'data_vg': 'ceph-54671a7c-dad9-563e-9508-4448c9acfc6a'})  2026-03-05 00:43:35.253219 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:43:35.253225 | orchestrator | 2026-03-05 00:43:35.253232 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-03-05 00:43:35.253239 | orchestrator | Thursday 05 March 2026 00:43:33 +0000 (0:00:00.184) 0:00:39.596 ******** 2026-03-05 00:43:35.253245 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-130794de-baff-5f0b-9c30-9a8206b73831', 'data_vg': 'ceph-130794de-baff-5f0b-9c30-9a8206b73831'}) 2026-03-05 00:43:35.253257 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-54671a7c-dad9-563e-9508-4448c9acfc6a', 'data_vg': 'ceph-54671a7c-dad9-563e-9508-4448c9acfc6a'}) 2026-03-05 00:43:41.221959 | orchestrator | 2026-03-05 00:43:41.226219 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-03-05 00:43:41.227556 | orchestrator | Thursday 05 March 2026 00:43:35 +0000 (0:00:01.399) 0:00:40.996 ******** 2026-03-05 00:43:41.227589 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-130794de-baff-5f0b-9c30-9a8206b73831', 'data_vg': 
'ceph-130794de-baff-5f0b-9c30-9a8206b73831'})  2026-03-05 00:43:41.227599 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-54671a7c-dad9-563e-9508-4448c9acfc6a', 'data_vg': 'ceph-54671a7c-dad9-563e-9508-4448c9acfc6a'})  2026-03-05 00:43:41.227607 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:43:41.227624 | orchestrator | 2026-03-05 00:43:41.227631 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-03-05 00:43:41.227638 | orchestrator | Thursday 05 March 2026 00:43:35 +0000 (0:00:00.194) 0:00:41.190 ******** 2026-03-05 00:43:41.227645 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:43:41.227651 | orchestrator | 2026-03-05 00:43:41.227658 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-03-05 00:43:41.227664 | orchestrator | Thursday 05 March 2026 00:43:35 +0000 (0:00:00.159) 0:00:41.350 ******** 2026-03-05 00:43:41.227671 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-130794de-baff-5f0b-9c30-9a8206b73831', 'data_vg': 'ceph-130794de-baff-5f0b-9c30-9a8206b73831'})  2026-03-05 00:43:41.227677 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-54671a7c-dad9-563e-9508-4448c9acfc6a', 'data_vg': 'ceph-54671a7c-dad9-563e-9508-4448c9acfc6a'})  2026-03-05 00:43:41.227684 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:43:41.227690 | orchestrator | 2026-03-05 00:43:41.227696 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-03-05 00:43:41.227703 | orchestrator | Thursday 05 March 2026 00:43:35 +0000 (0:00:00.158) 0:00:41.509 ******** 2026-03-05 00:43:41.227709 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:43:41.227715 | orchestrator | 2026-03-05 00:43:41.227721 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-03-05 00:43:41.227753 | orchestrator | 
Thursday 05 March 2026 00:43:36 +0000 (0:00:00.144) 0:00:41.654 ******** 2026-03-05 00:43:41.227760 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-130794de-baff-5f0b-9c30-9a8206b73831', 'data_vg': 'ceph-130794de-baff-5f0b-9c30-9a8206b73831'})  2026-03-05 00:43:41.227767 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-54671a7c-dad9-563e-9508-4448c9acfc6a', 'data_vg': 'ceph-54671a7c-dad9-563e-9508-4448c9acfc6a'})  2026-03-05 00:43:41.227774 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:43:41.227780 | orchestrator | 2026-03-05 00:43:41.227786 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-03-05 00:43:41.227793 | orchestrator | Thursday 05 March 2026 00:43:36 +0000 (0:00:00.396) 0:00:42.050 ******** 2026-03-05 00:43:41.227799 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:43:41.227805 | orchestrator | 2026-03-05 00:43:41.227811 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-03-05 00:43:41.227818 | orchestrator | Thursday 05 March 2026 00:43:36 +0000 (0:00:00.164) 0:00:42.215 ******** 2026-03-05 00:43:41.227824 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-130794de-baff-5f0b-9c30-9a8206b73831', 'data_vg': 'ceph-130794de-baff-5f0b-9c30-9a8206b73831'})  2026-03-05 00:43:41.227857 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-54671a7c-dad9-563e-9508-4448c9acfc6a', 'data_vg': 'ceph-54671a7c-dad9-563e-9508-4448c9acfc6a'})  2026-03-05 00:43:41.227864 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:43:41.227870 | orchestrator | 2026-03-05 00:43:41.227877 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-03-05 00:43:41.227883 | orchestrator | Thursday 05 March 2026 00:43:36 +0000 (0:00:00.151) 0:00:42.366 ******** 2026-03-05 00:43:41.227889 | orchestrator | ok: [testbed-node-4] 
2026-03-05 00:43:41.227897 | orchestrator | 2026-03-05 00:43:41.227903 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-03-05 00:43:41.227910 | orchestrator | Thursday 05 March 2026 00:43:36 +0000 (0:00:00.137) 0:00:42.504 ******** 2026-03-05 00:43:41.227916 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-130794de-baff-5f0b-9c30-9a8206b73831', 'data_vg': 'ceph-130794de-baff-5f0b-9c30-9a8206b73831'})  2026-03-05 00:43:41.227922 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-54671a7c-dad9-563e-9508-4448c9acfc6a', 'data_vg': 'ceph-54671a7c-dad9-563e-9508-4448c9acfc6a'})  2026-03-05 00:43:41.227928 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:43:41.227935 | orchestrator | 2026-03-05 00:43:41.227941 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-03-05 00:43:41.227947 | orchestrator | Thursday 05 March 2026 00:43:37 +0000 (0:00:00.155) 0:00:42.659 ******** 2026-03-05 00:43:41.227953 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-130794de-baff-5f0b-9c30-9a8206b73831', 'data_vg': 'ceph-130794de-baff-5f0b-9c30-9a8206b73831'})  2026-03-05 00:43:41.227960 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-54671a7c-dad9-563e-9508-4448c9acfc6a', 'data_vg': 'ceph-54671a7c-dad9-563e-9508-4448c9acfc6a'})  2026-03-05 00:43:41.227966 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:43:41.227972 | orchestrator | 2026-03-05 00:43:41.227978 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-03-05 00:43:41.229359 | orchestrator | Thursday 05 March 2026 00:43:37 +0000 (0:00:00.164) 0:00:42.823 ******** 2026-03-05 00:43:41.229400 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-130794de-baff-5f0b-9c30-9a8206b73831', 'data_vg': 'ceph-130794de-baff-5f0b-9c30-9a8206b73831'})  2026-03-05 
00:43:41.229409 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-54671a7c-dad9-563e-9508-4448c9acfc6a', 'data_vg': 'ceph-54671a7c-dad9-563e-9508-4448c9acfc6a'})  2026-03-05 00:43:41.229419 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:43:41.229429 | orchestrator | 2026-03-05 00:43:41.229439 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-03-05 00:43:41.229448 | orchestrator | Thursday 05 March 2026 00:43:37 +0000 (0:00:00.159) 0:00:42.983 ******** 2026-03-05 00:43:41.229457 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:43:41.229467 | orchestrator | 2026-03-05 00:43:41.229475 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-03-05 00:43:41.229484 | orchestrator | Thursday 05 March 2026 00:43:37 +0000 (0:00:00.160) 0:00:43.143 ******** 2026-03-05 00:43:41.229492 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:43:41.229497 | orchestrator | 2026-03-05 00:43:41.229503 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-03-05 00:43:41.229509 | orchestrator | Thursday 05 March 2026 00:43:37 +0000 (0:00:00.147) 0:00:43.290 ******** 2026-03-05 00:43:41.229515 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:43:41.229520 | orchestrator | 2026-03-05 00:43:41.229526 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-03-05 00:43:41.229531 | orchestrator | Thursday 05 March 2026 00:43:37 +0000 (0:00:00.155) 0:00:43.446 ******** 2026-03-05 00:43:41.229537 | orchestrator | ok: [testbed-node-4] => { 2026-03-05 00:43:41.229543 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-03-05 00:43:41.229566 | orchestrator | } 2026-03-05 00:43:41.229571 | orchestrator | 2026-03-05 00:43:41.229577 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-03-05 
00:43:41.229582 | orchestrator | Thursday 05 March 2026 00:43:37 +0000 (0:00:00.165) 0:00:43.612 ******** 2026-03-05 00:43:41.229588 | orchestrator | ok: [testbed-node-4] => { 2026-03-05 00:43:41.229593 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-03-05 00:43:41.229598 | orchestrator | } 2026-03-05 00:43:41.229604 | orchestrator | 2026-03-05 00:43:41.229621 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-03-05 00:43:41.229626 | orchestrator | Thursday 05 March 2026 00:43:38 +0000 (0:00:00.168) 0:00:43.780 ******** 2026-03-05 00:43:41.229632 | orchestrator | ok: [testbed-node-4] => { 2026-03-05 00:43:41.229637 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-03-05 00:43:41.229643 | orchestrator | } 2026-03-05 00:43:41.229648 | orchestrator | 2026-03-05 00:43:41.229654 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-03-05 00:43:41.229659 | orchestrator | Thursday 05 March 2026 00:43:38 +0000 (0:00:00.364) 0:00:44.144 ******** 2026-03-05 00:43:41.229665 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:43:41.229671 | orchestrator | 2026-03-05 00:43:41.229676 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-03-05 00:43:41.229681 | orchestrator | Thursday 05 March 2026 00:43:39 +0000 (0:00:00.550) 0:00:44.695 ******** 2026-03-05 00:43:41.229687 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:43:41.229692 | orchestrator | 2026-03-05 00:43:41.229697 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-03-05 00:43:41.229703 | orchestrator | Thursday 05 March 2026 00:43:39 +0000 (0:00:00.533) 0:00:45.228 ******** 2026-03-05 00:43:41.229708 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:43:41.229713 | orchestrator | 2026-03-05 00:43:41.229719 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] 
************************* 2026-03-05 00:43:41.229724 | orchestrator | Thursday 05 March 2026 00:43:40 +0000 (0:00:00.492) 0:00:45.720 ******** 2026-03-05 00:43:41.229729 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:43:41.229735 | orchestrator | 2026-03-05 00:43:41.229740 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-03-05 00:43:41.229745 | orchestrator | Thursday 05 March 2026 00:43:40 +0000 (0:00:00.152) 0:00:45.873 ******** 2026-03-05 00:43:41.229751 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:43:41.229756 | orchestrator | 2026-03-05 00:43:41.229761 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-03-05 00:43:41.229767 | orchestrator | Thursday 05 March 2026 00:43:40 +0000 (0:00:00.123) 0:00:45.996 ******** 2026-03-05 00:43:41.229772 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:43:41.229777 | orchestrator | 2026-03-05 00:43:41.229783 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-03-05 00:43:41.229788 | orchestrator | Thursday 05 March 2026 00:43:40 +0000 (0:00:00.119) 0:00:46.116 ******** 2026-03-05 00:43:41.229794 | orchestrator | ok: [testbed-node-4] => { 2026-03-05 00:43:41.229799 | orchestrator |  "vgs_report": { 2026-03-05 00:43:41.229805 | orchestrator |  "vg": [] 2026-03-05 00:43:41.229811 | orchestrator |  } 2026-03-05 00:43:41.229816 | orchestrator | } 2026-03-05 00:43:41.229822 | orchestrator | 2026-03-05 00:43:41.229827 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-03-05 00:43:41.229833 | orchestrator | Thursday 05 March 2026 00:43:40 +0000 (0:00:00.143) 0:00:46.260 ******** 2026-03-05 00:43:41.229838 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:43:41.229844 | orchestrator | 2026-03-05 00:43:41.229849 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] 
************************ 2026-03-05 00:43:41.229854 | orchestrator | Thursday 05 March 2026 00:43:40 +0000 (0:00:00.168) 0:00:46.428 ******** 2026-03-05 00:43:41.229860 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:43:41.229865 | orchestrator | 2026-03-05 00:43:41.229871 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-03-05 00:43:41.229881 | orchestrator | Thursday 05 March 2026 00:43:40 +0000 (0:00:00.145) 0:00:46.574 ******** 2026-03-05 00:43:41.229887 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:43:41.229892 | orchestrator | 2026-03-05 00:43:41.229897 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-03-05 00:43:41.229903 | orchestrator | Thursday 05 March 2026 00:43:41 +0000 (0:00:00.117) 0:00:46.691 ******** 2026-03-05 00:43:41.229909 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:43:41.229918 | orchestrator | 2026-03-05 00:43:41.229934 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-03-05 00:43:46.027901 | orchestrator | Thursday 05 March 2026 00:43:41 +0000 (0:00:00.152) 0:00:46.843 ******** 2026-03-05 00:43:46.028038 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:43:46.028054 | orchestrator | 2026-03-05 00:43:46.028155 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-03-05 00:43:46.028168 | orchestrator | Thursday 05 March 2026 00:43:41 +0000 (0:00:00.337) 0:00:47.181 ******** 2026-03-05 00:43:46.028179 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:43:46.028190 | orchestrator | 2026-03-05 00:43:46.028202 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-03-05 00:43:46.028214 | orchestrator | Thursday 05 March 2026 00:43:41 +0000 (0:00:00.171) 0:00:47.352 ******** 2026-03-05 00:43:46.028224 | orchestrator | skipping: [testbed-node-4] 
2026-03-05 00:43:46.028235 | orchestrator | 2026-03-05 00:43:46.028247 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-03-05 00:43:46.028258 | orchestrator | Thursday 05 March 2026 00:43:41 +0000 (0:00:00.154) 0:00:47.506 ******** 2026-03-05 00:43:46.028269 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:43:46.028280 | orchestrator | 2026-03-05 00:43:46.028291 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-03-05 00:43:46.028302 | orchestrator | Thursday 05 March 2026 00:43:42 +0000 (0:00:00.142) 0:00:47.649 ******** 2026-03-05 00:43:46.028312 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:43:46.028323 | orchestrator | 2026-03-05 00:43:46.028335 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-03-05 00:43:46.028346 | orchestrator | Thursday 05 March 2026 00:43:42 +0000 (0:00:00.140) 0:00:47.790 ******** 2026-03-05 00:43:46.028357 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:43:46.028368 | orchestrator | 2026-03-05 00:43:46.028379 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-03-05 00:43:46.028390 | orchestrator | Thursday 05 March 2026 00:43:42 +0000 (0:00:00.145) 0:00:47.936 ******** 2026-03-05 00:43:46.028404 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:43:46.028417 | orchestrator | 2026-03-05 00:43:46.028430 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-03-05 00:43:46.028443 | orchestrator | Thursday 05 March 2026 00:43:42 +0000 (0:00:00.139) 0:00:48.075 ******** 2026-03-05 00:43:46.028455 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:43:46.028468 | orchestrator | 2026-03-05 00:43:46.028482 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-03-05 00:43:46.028495 | orchestrator | 
Thursday 05 March 2026 00:43:42 +0000 (0:00:00.140) 0:00:48.216 ******** 2026-03-05 00:43:46.028507 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:43:46.028521 | orchestrator | 2026-03-05 00:43:46.028534 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-03-05 00:43:46.028547 | orchestrator | Thursday 05 March 2026 00:43:42 +0000 (0:00:00.137) 0:00:48.354 ******** 2026-03-05 00:43:46.028560 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:43:46.028573 | orchestrator | 2026-03-05 00:43:46.028586 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-03-05 00:43:46.028599 | orchestrator | Thursday 05 March 2026 00:43:42 +0000 (0:00:00.150) 0:00:48.504 ******** 2026-03-05 00:43:46.028614 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-130794de-baff-5f0b-9c30-9a8206b73831', 'data_vg': 'ceph-130794de-baff-5f0b-9c30-9a8206b73831'})  2026-03-05 00:43:46.028720 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-54671a7c-dad9-563e-9508-4448c9acfc6a', 'data_vg': 'ceph-54671a7c-dad9-563e-9508-4448c9acfc6a'})  2026-03-05 00:43:46.028737 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:43:46.028750 | orchestrator | 2026-03-05 00:43:46.028764 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-03-05 00:43:46.028778 | orchestrator | Thursday 05 March 2026 00:43:43 +0000 (0:00:00.171) 0:00:48.676 ******** 2026-03-05 00:43:46.028789 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-130794de-baff-5f0b-9c30-9a8206b73831', 'data_vg': 'ceph-130794de-baff-5f0b-9c30-9a8206b73831'})  2026-03-05 00:43:46.028800 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-54671a7c-dad9-563e-9508-4448c9acfc6a', 'data_vg': 'ceph-54671a7c-dad9-563e-9508-4448c9acfc6a'})  2026-03-05 00:43:46.028811 | orchestrator | skipping: 
[testbed-node-4] 2026-03-05 00:43:46.028821 | orchestrator | 2026-03-05 00:43:46.028832 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-03-05 00:43:46.028843 | orchestrator | Thursday 05 March 2026 00:43:43 +0000 (0:00:00.166) 0:00:48.843 ******** 2026-03-05 00:43:46.028854 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-130794de-baff-5f0b-9c30-9a8206b73831', 'data_vg': 'ceph-130794de-baff-5f0b-9c30-9a8206b73831'})  2026-03-05 00:43:46.028865 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-54671a7c-dad9-563e-9508-4448c9acfc6a', 'data_vg': 'ceph-54671a7c-dad9-563e-9508-4448c9acfc6a'})  2026-03-05 00:43:46.028876 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:43:46.028886 | orchestrator | 2026-03-05 00:43:46.028897 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-03-05 00:43:46.028908 | orchestrator | Thursday 05 March 2026 00:43:43 +0000 (0:00:00.404) 0:00:49.247 ******** 2026-03-05 00:43:46.028919 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-130794de-baff-5f0b-9c30-9a8206b73831', 'data_vg': 'ceph-130794de-baff-5f0b-9c30-9a8206b73831'})  2026-03-05 00:43:46.028930 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-54671a7c-dad9-563e-9508-4448c9acfc6a', 'data_vg': 'ceph-54671a7c-dad9-563e-9508-4448c9acfc6a'})  2026-03-05 00:43:46.028941 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:43:46.028952 | orchestrator | 2026-03-05 00:43:46.028999 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-03-05 00:43:46.029019 | orchestrator | Thursday 05 March 2026 00:43:43 +0000 (0:00:00.168) 0:00:49.415 ******** 2026-03-05 00:43:46.029037 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-130794de-baff-5f0b-9c30-9a8206b73831', 'data_vg': 
'ceph-130794de-baff-5f0b-9c30-9a8206b73831'})  2026-03-05 00:43:46.029056 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-54671a7c-dad9-563e-9508-4448c9acfc6a', 'data_vg': 'ceph-54671a7c-dad9-563e-9508-4448c9acfc6a'})  2026-03-05 00:43:46.029101 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:43:46.029120 | orchestrator | 2026-03-05 00:43:46.029138 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-03-05 00:43:46.029157 | orchestrator | Thursday 05 March 2026 00:43:43 +0000 (0:00:00.165) 0:00:49.580 ******** 2026-03-05 00:43:46.029177 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-130794de-baff-5f0b-9c30-9a8206b73831', 'data_vg': 'ceph-130794de-baff-5f0b-9c30-9a8206b73831'})  2026-03-05 00:43:46.029197 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-54671a7c-dad9-563e-9508-4448c9acfc6a', 'data_vg': 'ceph-54671a7c-dad9-563e-9508-4448c9acfc6a'})  2026-03-05 00:43:46.029215 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:43:46.029235 | orchestrator | 2026-03-05 00:43:46.029247 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-03-05 00:43:46.029258 | orchestrator | Thursday 05 March 2026 00:43:44 +0000 (0:00:00.166) 0:00:49.747 ******** 2026-03-05 00:43:46.029269 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-130794de-baff-5f0b-9c30-9a8206b73831', 'data_vg': 'ceph-130794de-baff-5f0b-9c30-9a8206b73831'})  2026-03-05 00:43:46.029300 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-54671a7c-dad9-563e-9508-4448c9acfc6a', 'data_vg': 'ceph-54671a7c-dad9-563e-9508-4448c9acfc6a'})  2026-03-05 00:43:46.029311 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:43:46.029322 | orchestrator | 2026-03-05 00:43:46.029333 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-03-05 
00:43:46.029344 | orchestrator | Thursday 05 March 2026 00:43:44 +0000 (0:00:00.175) 0:00:49.923 ******** 2026-03-05 00:43:46.029355 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-130794de-baff-5f0b-9c30-9a8206b73831', 'data_vg': 'ceph-130794de-baff-5f0b-9c30-9a8206b73831'})  2026-03-05 00:43:46.029366 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-54671a7c-dad9-563e-9508-4448c9acfc6a', 'data_vg': 'ceph-54671a7c-dad9-563e-9508-4448c9acfc6a'})  2026-03-05 00:43:46.029377 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:43:46.029388 | orchestrator | 2026-03-05 00:43:46.029399 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-03-05 00:43:46.029409 | orchestrator | Thursday 05 March 2026 00:43:44 +0000 (0:00:00.151) 0:00:50.074 ******** 2026-03-05 00:43:46.029420 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:43:46.029431 | orchestrator | 2026-03-05 00:43:46.029442 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-03-05 00:43:46.029453 | orchestrator | Thursday 05 March 2026 00:43:44 +0000 (0:00:00.499) 0:00:50.573 ******** 2026-03-05 00:43:46.029463 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:43:46.029474 | orchestrator | 2026-03-05 00:43:46.029485 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-03-05 00:43:46.029496 | orchestrator | Thursday 05 March 2026 00:43:45 +0000 (0:00:00.521) 0:00:51.095 ******** 2026-03-05 00:43:46.029506 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:43:46.029517 | orchestrator | 2026-03-05 00:43:46.029528 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-03-05 00:43:46.029538 | orchestrator | Thursday 05 March 2026 00:43:45 +0000 (0:00:00.155) 0:00:51.250 ******** 2026-03-05 00:43:46.029549 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 
'osd-block-130794de-baff-5f0b-9c30-9a8206b73831', 'vg_name': 'ceph-130794de-baff-5f0b-9c30-9a8206b73831'}) 2026-03-05 00:43:46.029562 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-54671a7c-dad9-563e-9508-4448c9acfc6a', 'vg_name': 'ceph-54671a7c-dad9-563e-9508-4448c9acfc6a'}) 2026-03-05 00:43:46.029573 | orchestrator | 2026-03-05 00:43:46.029584 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-03-05 00:43:46.029594 | orchestrator | Thursday 05 March 2026 00:43:45 +0000 (0:00:00.175) 0:00:51.426 ******** 2026-03-05 00:43:46.029605 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-130794de-baff-5f0b-9c30-9a8206b73831', 'data_vg': 'ceph-130794de-baff-5f0b-9c30-9a8206b73831'})  2026-03-05 00:43:46.029616 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-54671a7c-dad9-563e-9508-4448c9acfc6a', 'data_vg': 'ceph-54671a7c-dad9-563e-9508-4448c9acfc6a'})  2026-03-05 00:43:46.029626 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:43:46.029637 | orchestrator | 2026-03-05 00:43:46.029648 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-03-05 00:43:46.029659 | orchestrator | Thursday 05 March 2026 00:43:45 +0000 (0:00:00.158) 0:00:51.585 ******** 2026-03-05 00:43:46.029670 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-130794de-baff-5f0b-9c30-9a8206b73831', 'data_vg': 'ceph-130794de-baff-5f0b-9c30-9a8206b73831'})  2026-03-05 00:43:46.029691 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-54671a7c-dad9-563e-9508-4448c9acfc6a', 'data_vg': 'ceph-54671a7c-dad9-563e-9508-4448c9acfc6a'})  2026-03-05 00:43:52.343999 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:43:52.344265 | orchestrator | 2026-03-05 00:43:52.344333 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-05 00:43:52.344348 | 
orchestrator | Thursday 05 March 2026 00:43:46 +0000 (0:00:00.162) 0:00:51.748 ******** 2026-03-05 00:43:52.344361 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-130794de-baff-5f0b-9c30-9a8206b73831', 'data_vg': 'ceph-130794de-baff-5f0b-9c30-9a8206b73831'})  2026-03-05 00:43:52.344376 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-54671a7c-dad9-563e-9508-4448c9acfc6a', 'data_vg': 'ceph-54671a7c-dad9-563e-9508-4448c9acfc6a'})  2026-03-05 00:43:52.344387 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:43:52.344398 | orchestrator | 2026-03-05 00:43:52.344409 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-05 00:43:52.344420 | orchestrator | Thursday 05 March 2026 00:43:46 +0000 (0:00:00.155) 0:00:51.904 ******** 2026-03-05 00:43:52.344431 | orchestrator | ok: [testbed-node-4] => { 2026-03-05 00:43:52.344442 | orchestrator |  "lvm_report": { 2026-03-05 00:43:52.344455 | orchestrator |  "lv": [ 2026-03-05 00:43:52.344468 | orchestrator |  { 2026-03-05 00:43:52.344480 | orchestrator |  "lv_name": "osd-block-130794de-baff-5f0b-9c30-9a8206b73831", 2026-03-05 00:43:52.344495 | orchestrator |  "vg_name": "ceph-130794de-baff-5f0b-9c30-9a8206b73831" 2026-03-05 00:43:52.344507 | orchestrator |  }, 2026-03-05 00:43:52.344519 | orchestrator |  { 2026-03-05 00:43:52.344532 | orchestrator |  "lv_name": "osd-block-54671a7c-dad9-563e-9508-4448c9acfc6a", 2026-03-05 00:43:52.344544 | orchestrator |  "vg_name": "ceph-54671a7c-dad9-563e-9508-4448c9acfc6a" 2026-03-05 00:43:52.344556 | orchestrator |  } 2026-03-05 00:43:52.344569 | orchestrator |  ], 2026-03-05 00:43:52.344580 | orchestrator |  "pv": [ 2026-03-05 00:43:52.344593 | orchestrator |  { 2026-03-05 00:43:52.344605 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-05 00:43:52.344636 | orchestrator |  "vg_name": "ceph-130794de-baff-5f0b-9c30-9a8206b73831" 2026-03-05 00:43:52.344649 | orchestrator |  }, 2026-03-05 
00:43:52.344661 | orchestrator |  { 2026-03-05 00:43:52.344674 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-05 00:43:52.344686 | orchestrator |  "vg_name": "ceph-54671a7c-dad9-563e-9508-4448c9acfc6a" 2026-03-05 00:43:52.344699 | orchestrator |  } 2026-03-05 00:43:52.344711 | orchestrator |  ] 2026-03-05 00:43:52.344724 | orchestrator |  } 2026-03-05 00:43:52.344736 | orchestrator | } 2026-03-05 00:43:52.344749 | orchestrator | 2026-03-05 00:43:52.344761 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-03-05 00:43:52.344774 | orchestrator | 2026-03-05 00:43:52.344787 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-05 00:43:52.344799 | orchestrator | Thursday 05 March 2026 00:43:46 +0000 (0:00:00.504) 0:00:52.409 ******** 2026-03-05 00:43:52.344812 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-03-05 00:43:52.344825 | orchestrator | 2026-03-05 00:43:52.344839 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-05 00:43:52.344850 | orchestrator | Thursday 05 March 2026 00:43:47 +0000 (0:00:00.263) 0:00:52.672 ******** 2026-03-05 00:43:52.344860 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:43:52.344871 | orchestrator | 2026-03-05 00:43:52.344882 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:43:52.344893 | orchestrator | Thursday 05 March 2026 00:43:47 +0000 (0:00:00.240) 0:00:52.912 ******** 2026-03-05 00:43:52.344904 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-03-05 00:43:52.344915 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-03-05 00:43:52.344925 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-03-05 00:43:52.344936 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-03-05 00:43:52.344954 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-03-05 00:43:52.344965 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-03-05 00:43:52.344976 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-03-05 00:43:52.344986 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-03-05 00:43:52.344997 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-03-05 00:43:52.345013 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-03-05 00:43:52.345024 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-03-05 00:43:52.345035 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-03-05 00:43:52.345045 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-03-05 00:43:52.345075 | orchestrator | 2026-03-05 00:43:52.345087 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:43:52.345098 | orchestrator | Thursday 05 March 2026 00:43:47 +0000 (0:00:00.449) 0:00:53.362 ******** 2026-03-05 00:43:52.345108 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:43:52.345119 | orchestrator | 2026-03-05 00:43:52.345130 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:43:52.345140 | orchestrator | Thursday 05 March 2026 00:43:47 +0000 (0:00:00.188) 0:00:53.550 ******** 2026-03-05 00:43:52.345151 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:43:52.345162 | orchestrator | 2026-03-05 
00:43:52.345173 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:43:52.345207 | orchestrator | Thursday 05 March 2026 00:43:48 +0000 (0:00:00.206) 0:00:53.757 ********
2026-03-05 00:43:52.345218 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:43:52.345229 | orchestrator |
2026-03-05 00:43:52.345240 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:43:52.345251 | orchestrator | Thursday 05 March 2026 00:43:48 +0000 (0:00:00.215) 0:00:53.973 ********
2026-03-05 00:43:52.345262 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:43:52.345272 | orchestrator |
2026-03-05 00:43:52.345283 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:43:52.345294 | orchestrator | Thursday 05 March 2026 00:43:48 +0000 (0:00:00.212) 0:00:54.185 ********
2026-03-05 00:43:52.345305 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:43:52.345315 | orchestrator |
2026-03-05 00:43:52.345326 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:43:52.345337 | orchestrator | Thursday 05 March 2026 00:43:49 +0000 (0:00:00.686) 0:00:54.872 ********
2026-03-05 00:43:52.345347 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:43:52.345358 | orchestrator |
2026-03-05 00:43:52.345369 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:43:52.345380 | orchestrator | Thursday 05 March 2026 00:43:49 +0000 (0:00:00.195) 0:00:55.068 ********
2026-03-05 00:43:52.345391 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:43:52.345401 | orchestrator |
2026-03-05 00:43:52.345412 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:43:52.345423 | orchestrator | Thursday 05 March 2026 00:43:49 +0000 (0:00:00.232) 0:00:55.300 ********
2026-03-05 00:43:52.345433 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:43:52.345444 | orchestrator |
2026-03-05 00:43:52.345455 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:43:52.345466 | orchestrator | Thursday 05 March 2026 00:43:49 +0000 (0:00:00.220) 0:00:55.520 ********
2026-03-05 00:43:52.345476 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d85b406f-f47a-4803-8455-48f8dde86a68)
2026-03-05 00:43:52.345489 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d85b406f-f47a-4803-8455-48f8dde86a68)
2026-03-05 00:43:52.345508 | orchestrator |
2026-03-05 00:43:52.345519 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:43:52.345530 | orchestrator | Thursday 05 March 2026 00:43:50 +0000 (0:00:00.414) 0:00:55.935 ********
2026-03-05 00:43:52.345541 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_51d29519-c1f9-43c2-8da2-810d6ee2cf1d)
2026-03-05 00:43:52.345552 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_51d29519-c1f9-43c2-8da2-810d6ee2cf1d)
2026-03-05 00:43:52.345563 | orchestrator |
2026-03-05 00:43:52.345574 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:43:52.345585 | orchestrator | Thursday 05 March 2026 00:43:50 +0000 (0:00:00.435) 0:00:56.371 ********
2026-03-05 00:43:52.345595 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_01b35dfc-cc13-430f-9521-065aaefb7085)
2026-03-05 00:43:52.345606 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_01b35dfc-cc13-430f-9521-065aaefb7085)
2026-03-05 00:43:52.345617 | orchestrator |
2026-03-05 00:43:52.345628 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:43:52.345638 | orchestrator | Thursday 05 March 2026 00:43:51 +0000 (0:00:00.438) 0:00:56.809 ********
2026-03-05 00:43:52.345649 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_f3d47084-7273-4e4c-b048-5cf25f7ffc67)
2026-03-05 00:43:52.345660 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_f3d47084-7273-4e4c-b048-5cf25f7ffc67)
2026-03-05 00:43:52.345671 | orchestrator |
2026-03-05 00:43:52.345681 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:43:52.345692 | orchestrator | Thursday 05 March 2026 00:43:51 +0000 (0:00:00.457) 0:00:57.267 ********
2026-03-05 00:43:52.345703 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-05 00:43:52.345713 | orchestrator |
2026-03-05 00:43:52.345724 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:43:52.345735 | orchestrator | Thursday 05 March 2026 00:43:51 +0000 (0:00:00.340) 0:00:57.607 ********
2026-03-05 00:43:52.345746 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-03-05 00:43:52.345756 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-03-05 00:43:52.345767 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-03-05 00:43:52.345778 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-03-05 00:43:52.345789 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-03-05 00:43:52.345799 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-03-05 00:43:52.345810 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-03-05 00:43:52.345821 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-03-05 00:43:52.345831 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-03-05 00:43:52.345842 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-03-05 00:43:52.345853 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-03-05 00:43:52.345871 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-03-05 00:44:01.625259 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-03-05 00:44:01.625406 | orchestrator |
2026-03-05 00:44:01.625423 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:44:01.625433 | orchestrator | Thursday 05 March 2026 00:43:52 +0000 (0:00:00.461) 0:00:58.069 ********
2026-03-05 00:44:01.625468 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:44:01.625479 | orchestrator |
2026-03-05 00:44:01.625488 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:44:01.625497 | orchestrator | Thursday 05 March 2026 00:43:52 +0000 (0:00:00.207) 0:00:58.277 ********
2026-03-05 00:44:01.625506 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:44:01.625515 | orchestrator |
2026-03-05 00:44:01.625577 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:44:01.625587 | orchestrator | Thursday 05 March 2026 00:43:53 +0000 (0:00:00.689) 0:00:58.966 ********
2026-03-05 00:44:01.625596 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:44:01.625604 | orchestrator |
2026-03-05 00:44:01.625613 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:44:01.625622 | orchestrator | Thursday 05 March 2026 00:43:53 +0000 (0:00:00.201) 0:00:59.167 ********
2026-03-05 00:44:01.625631 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:44:01.625639 | orchestrator |
2026-03-05 00:44:01.625648 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:44:01.625657 | orchestrator | Thursday 05 March 2026 00:43:53 +0000 (0:00:00.203) 0:00:59.371 ********
2026-03-05 00:44:01.625665 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:44:01.625674 | orchestrator |
2026-03-05 00:44:01.625683 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:44:01.625691 | orchestrator | Thursday 05 March 2026 00:43:53 +0000 (0:00:00.217) 0:00:59.588 ********
2026-03-05 00:44:01.625700 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:44:01.625709 | orchestrator |
2026-03-05 00:44:01.625722 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:44:01.625731 | orchestrator | Thursday 05 March 2026 00:43:54 +0000 (0:00:00.215) 0:00:59.804 ********
2026-03-05 00:44:01.625741 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:44:01.625752 | orchestrator |
2026-03-05 00:44:01.625762 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:44:01.625772 | orchestrator | Thursday 05 March 2026 00:43:54 +0000 (0:00:00.249) 0:01:00.053 ********
2026-03-05 00:44:01.625782 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:44:01.625793 | orchestrator |
2026-03-05 00:44:01.625803 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:44:01.625813 | orchestrator | Thursday 05 March 2026 00:43:54 +0000 (0:00:00.216) 0:01:00.270 ********
2026-03-05 00:44:01.625823 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-03-05 00:44:01.625835 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-03-05 00:44:01.625847 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-03-05 00:44:01.625857 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-03-05 00:44:01.625867 | orchestrator |
2026-03-05 00:44:01.625878 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:44:01.625889 | orchestrator | Thursday 05 March 2026 00:43:55 +0000 (0:00:00.662) 0:01:00.932 ********
2026-03-05 00:44:01.625899 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:44:01.625909 | orchestrator |
2026-03-05 00:44:01.625920 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:44:01.625930 | orchestrator | Thursday 05 March 2026 00:43:55 +0000 (0:00:00.222) 0:01:01.155 ********
2026-03-05 00:44:01.625940 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:44:01.625951 | orchestrator |
2026-03-05 00:44:01.625961 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:44:01.625975 | orchestrator | Thursday 05 March 2026 00:43:55 +0000 (0:00:00.200) 0:01:01.356 ********
2026-03-05 00:44:01.625990 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:44:01.626004 | orchestrator |
2026-03-05 00:44:01.626138 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:44:01.626163 | orchestrator | Thursday 05 March 2026 00:43:55 +0000 (0:00:00.207) 0:01:01.563 ********
2026-03-05 00:44:01.626190 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:44:01.626205 | orchestrator |
2026-03-05 00:44:01.626219 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-03-05 00:44:01.626230 | orchestrator | Thursday 05 March 2026 00:43:56 +0000 (0:00:00.218) 0:01:01.781 ********
2026-03-05 00:44:01.626251 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:44:01.626271 | orchestrator |
2026-03-05 00:44:01.626283 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-03-05 00:44:01.626298 | orchestrator | Thursday 05 March 2026 00:43:56 +0000 (0:00:00.346) 0:01:02.127 ********
2026-03-05 00:44:01.626312 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7f4ff93a-c4fd-5f9b-af1c-107d8e49bf15'}})
2026-03-05 00:44:01.626325 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '56dff28b-2239-50bc-bb4f-66f9aa80ba88'}})
2026-03-05 00:44:01.626338 | orchestrator |
2026-03-05 00:44:01.626353 | orchestrator | TASK [Create block VGs] ********************************************************
2026-03-05 00:44:01.626367 | orchestrator | Thursday 05 March 2026 00:43:56 +0000 (0:00:00.200) 0:01:02.328 ********
2026-03-05 00:44:01.626382 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-7f4ff93a-c4fd-5f9b-af1c-107d8e49bf15', 'data_vg': 'ceph-7f4ff93a-c4fd-5f9b-af1c-107d8e49bf15'})
2026-03-05 00:44:01.626398 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-56dff28b-2239-50bc-bb4f-66f9aa80ba88', 'data_vg': 'ceph-56dff28b-2239-50bc-bb4f-66f9aa80ba88'})
2026-03-05 00:44:01.626413 | orchestrator |
2026-03-05 00:44:01.626428 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-03-05 00:44:01.626467 | orchestrator | Thursday 05 March 2026 00:43:58 +0000 (0:00:01.792) 0:01:04.120 ********
2026-03-05 00:44:01.626484 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7f4ff93a-c4fd-5f9b-af1c-107d8e49bf15', 'data_vg': 'ceph-7f4ff93a-c4fd-5f9b-af1c-107d8e49bf15'})
2026-03-05 00:44:01.626501 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-56dff28b-2239-50bc-bb4f-66f9aa80ba88', 'data_vg': 'ceph-56dff28b-2239-50bc-bb4f-66f9aa80ba88'})
2026-03-05 00:44:01.626517 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:44:01.626532 | orchestrator |
2026-03-05 00:44:01.626547 | orchestrator | TASK [Create block LVs] ********************************************************
2026-03-05 00:44:01.626562 | orchestrator | Thursday 05 March 2026 00:43:58 +0000 (0:00:00.167) 0:01:04.287 ********
2026-03-05 00:44:01.626577 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-7f4ff93a-c4fd-5f9b-af1c-107d8e49bf15', 'data_vg': 'ceph-7f4ff93a-c4fd-5f9b-af1c-107d8e49bf15'})
2026-03-05 00:44:01.626592 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-56dff28b-2239-50bc-bb4f-66f9aa80ba88', 'data_vg': 'ceph-56dff28b-2239-50bc-bb4f-66f9aa80ba88'})
2026-03-05 00:44:01.626607 | orchestrator |
2026-03-05 00:44:01.626620 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-03-05 00:44:01.626634 | orchestrator | Thursday 05 March 2026 00:44:00 +0000 (0:00:01.349) 0:01:05.637 ********
2026-03-05 00:44:01.626649 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7f4ff93a-c4fd-5f9b-af1c-107d8e49bf15', 'data_vg': 'ceph-7f4ff93a-c4fd-5f9b-af1c-107d8e49bf15'})
2026-03-05 00:44:01.626663 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-56dff28b-2239-50bc-bb4f-66f9aa80ba88', 'data_vg': 'ceph-56dff28b-2239-50bc-bb4f-66f9aa80ba88'})
2026-03-05 00:44:01.628173 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:44:01.628255 | orchestrator |
2026-03-05 00:44:01.628272 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-03-05 00:44:01.628289 | orchestrator | Thursday 05 March 2026 00:44:00 +0000 (0:00:00.156) 0:01:05.797 ********
2026-03-05 00:44:01.628305 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:44:01.628320 | orchestrator |
2026-03-05 00:44:01.628335 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-03-05 00:44:01.628351 | orchestrator | Thursday 05 March 2026 00:44:00 +0000 (0:00:00.156) 0:01:05.954 ********
2026-03-05 00:44:01.628388 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7f4ff93a-c4fd-5f9b-af1c-107d8e49bf15', 'data_vg': 'ceph-7f4ff93a-c4fd-5f9b-af1c-107d8e49bf15'})
2026-03-05 00:44:01.628406 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-56dff28b-2239-50bc-bb4f-66f9aa80ba88', 'data_vg': 'ceph-56dff28b-2239-50bc-bb4f-66f9aa80ba88'})
2026-03-05 00:44:01.628423 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:44:01.628439 | orchestrator |
2026-03-05 00:44:01.628454 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-03-05 00:44:01.628469 | orchestrator | Thursday 05 March 2026 00:44:00 +0000 (0:00:00.158) 0:01:06.113 ********
2026-03-05 00:44:01.628485 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:44:01.628500 | orchestrator |
2026-03-05 00:44:01.628514 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-03-05 00:44:01.628529 | orchestrator | Thursday 05 March 2026 00:44:00 +0000 (0:00:00.135) 0:01:06.248 ********
2026-03-05 00:44:01.628546 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7f4ff93a-c4fd-5f9b-af1c-107d8e49bf15', 'data_vg': 'ceph-7f4ff93a-c4fd-5f9b-af1c-107d8e49bf15'})
2026-03-05 00:44:01.628563 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-56dff28b-2239-50bc-bb4f-66f9aa80ba88', 'data_vg': 'ceph-56dff28b-2239-50bc-bb4f-66f9aa80ba88'})
2026-03-05 00:44:01.628581 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:44:01.628596 | orchestrator |
2026-03-05 00:44:01.628611 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-03-05 00:44:01.628627 | orchestrator | Thursday 05 March 2026 00:44:00 +0000 (0:00:00.166) 0:01:06.415 ********
2026-03-05 00:44:01.628641 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:44:01.628657 | orchestrator |
2026-03-05 00:44:01.628673 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-03-05 00:44:01.628688 | orchestrator | Thursday 05 March 2026 00:44:01 +0000 (0:00:00.231) 0:01:06.647 ********
2026-03-05 00:44:01.628703 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7f4ff93a-c4fd-5f9b-af1c-107d8e49bf15', 'data_vg': 'ceph-7f4ff93a-c4fd-5f9b-af1c-107d8e49bf15'})
2026-03-05 00:44:01.628732 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-56dff28b-2239-50bc-bb4f-66f9aa80ba88', 'data_vg': 'ceph-56dff28b-2239-50bc-bb4f-66f9aa80ba88'})
2026-03-05 00:44:01.628748 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:44:01.628763 | orchestrator |
2026-03-05 00:44:01.628779 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-03-05 00:44:01.628793 | orchestrator | Thursday 05 March 2026 00:44:01 +0000 (0:00:00.151) 0:01:06.798 ********
2026-03-05 00:44:01.628809 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:44:01.628825 | orchestrator |
2026-03-05 00:44:01.628841 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-03-05 00:44:01.628857 | orchestrator | Thursday 05 March 2026 00:44:01 +0000 (0:00:00.382) 0:01:07.181 ********
2026-03-05 00:44:01.628897 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7f4ff93a-c4fd-5f9b-af1c-107d8e49bf15', 'data_vg': 'ceph-7f4ff93a-c4fd-5f9b-af1c-107d8e49bf15'})
2026-03-05 00:44:07.987158 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-56dff28b-2239-50bc-bb4f-66f9aa80ba88', 'data_vg': 'ceph-56dff28b-2239-50bc-bb4f-66f9aa80ba88'})
2026-03-05 00:44:07.987285 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:44:07.987302 | orchestrator |
2026-03-05 00:44:07.987314 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-03-05 00:44:07.987325 | orchestrator | Thursday 05 March 2026 00:44:01 +0000 (0:00:00.188) 0:01:07.369 ********
2026-03-05 00:44:07.987334 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7f4ff93a-c4fd-5f9b-af1c-107d8e49bf15', 'data_vg': 'ceph-7f4ff93a-c4fd-5f9b-af1c-107d8e49bf15'})
2026-03-05 00:44:07.987344 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-56dff28b-2239-50bc-bb4f-66f9aa80ba88', 'data_vg': 'ceph-56dff28b-2239-50bc-bb4f-66f9aa80ba88'})
2026-03-05 00:44:07.987445 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:44:07.987456 | orchestrator |
2026-03-05 00:44:07.987465 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-03-05 00:44:07.987474 | orchestrator | Thursday 05 March 2026 00:44:01 +0000 (0:00:00.179) 0:01:07.548 ********
2026-03-05 00:44:07.987483 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7f4ff93a-c4fd-5f9b-af1c-107d8e49bf15', 'data_vg': 'ceph-7f4ff93a-c4fd-5f9b-af1c-107d8e49bf15'})
2026-03-05 00:44:07.987492 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-56dff28b-2239-50bc-bb4f-66f9aa80ba88', 'data_vg': 'ceph-56dff28b-2239-50bc-bb4f-66f9aa80ba88'})
2026-03-05 00:44:07.987501 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:44:07.987509 | orchestrator |
2026-03-05 00:44:07.987518 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-03-05 00:44:07.987541 | orchestrator | Thursday 05 March 2026 00:44:02 +0000 (0:00:00.160) 0:01:07.709 ********
2026-03-05 00:44:07.987550 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:44:07.987558 | orchestrator |
2026-03-05 00:44:07.987567 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-03-05 00:44:07.987576 | orchestrator | Thursday 05 March 2026 00:44:02 +0000 (0:00:00.182) 0:01:07.891 ********
2026-03-05 00:44:07.987584 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:44:07.987593 | orchestrator |
2026-03-05 00:44:07.987601 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-03-05 00:44:07.987610 | orchestrator | Thursday 05 March 2026 00:44:02 +0000 (0:00:00.157) 0:01:08.049 ********
2026-03-05 00:44:07.987619 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:44:07.987627 | orchestrator |
2026-03-05 00:44:07.987636 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-03-05 00:44:07.987644 | orchestrator | Thursday 05 March 2026 00:44:02 +0000 (0:00:00.143) 0:01:08.192 ********
2026-03-05 00:44:07.987653 | orchestrator | ok: [testbed-node-5] => {
2026-03-05 00:44:07.987664 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-03-05 00:44:07.987675 | orchestrator | }
2026-03-05 00:44:07.987688 | orchestrator |
2026-03-05 00:44:07.987699 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-03-05 00:44:07.987709 | orchestrator | Thursday 05 March 2026 00:44:02 +0000 (0:00:00.153) 0:01:08.346 ********
2026-03-05 00:44:07.987719 | orchestrator | ok: [testbed-node-5] => {
2026-03-05 00:44:07.987729 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-03-05 00:44:07.987739 | orchestrator | }
2026-03-05 00:44:07.987749 | orchestrator |
2026-03-05 00:44:07.987759 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-03-05 00:44:07.987769 | orchestrator | Thursday 05 March 2026 00:44:02 +0000 (0:00:00.153) 0:01:08.500 ********
2026-03-05 00:44:07.987779 | orchestrator | ok: [testbed-node-5] => {
2026-03-05 00:44:07.987789 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-03-05 00:44:07.987800 | orchestrator | }
2026-03-05 00:44:07.987809 | orchestrator |
2026-03-05 00:44:07.987820 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-03-05 00:44:07.987830 | orchestrator | Thursday 05 March 2026 00:44:03 +0000 (0:00:00.200) 0:01:08.700 ********
2026-03-05 00:44:07.987840 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:44:07.987851 | orchestrator |
2026-03-05 00:44:07.987861 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-03-05 00:44:07.987871 | orchestrator | Thursday 05 March 2026 00:44:03 +0000 (0:00:00.515) 0:01:09.216 ********
2026-03-05 00:44:07.987882 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:44:07.987891 | orchestrator |
2026-03-05 00:44:07.987901 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-03-05 00:44:07.987911 | orchestrator | Thursday 05 March 2026 00:44:04 +0000 (0:00:00.533) 0:01:09.749 ********
2026-03-05 00:44:07.987921 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:44:07.987937 | orchestrator |
2026-03-05 00:44:07.987948 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-03-05 00:44:07.987958 | orchestrator | Thursday 05 March 2026 00:44:04 +0000 (0:00:00.738) 0:01:10.488 ********
2026-03-05 00:44:07.987968 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:44:07.987977 | orchestrator |
2026-03-05 00:44:07.987988 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-03-05 00:44:07.987998 | orchestrator | Thursday 05 March 2026 00:44:05 +0000 (0:00:00.153) 0:01:10.641 ********
2026-03-05 00:44:07.988009 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:44:07.988020 | orchestrator |
2026-03-05 00:44:07.988028 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-03-05 00:44:07.988037 | orchestrator | Thursday 05 March 2026 00:44:05 +0000 (0:00:00.133) 0:01:10.774 ********
2026-03-05 00:44:07.988073 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:44:07.988089 | orchestrator |
2026-03-05 00:44:07.988104 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-03-05 00:44:07.988120 | orchestrator | Thursday 05 March 2026 00:44:05 +0000 (0:00:00.130) 0:01:10.905 ********
2026-03-05 00:44:07.988134 | orchestrator | ok: [testbed-node-5] => {
2026-03-05 00:44:07.988145 | orchestrator |     "vgs_report": {
2026-03-05 00:44:07.988155 | orchestrator |         "vg": []
2026-03-05 00:44:07.988180 | orchestrator |     }
2026-03-05 00:44:07.988190 | orchestrator | }
2026-03-05 00:44:07.988199 | orchestrator |
2026-03-05 00:44:07.988221 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-03-05 00:44:07.988230 | orchestrator | Thursday 05 March 2026 00:44:05 +0000 (0:00:00.146) 0:01:11.052 ********
2026-03-05 00:44:07.988248 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:44:07.988257 | orchestrator |
2026-03-05 00:44:07.988266 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-03-05 00:44:07.988275 | orchestrator | Thursday 05 March 2026 00:44:05 +0000 (0:00:00.140) 0:01:11.193 ********
2026-03-05 00:44:07.988283 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:44:07.988292 | orchestrator |
2026-03-05 00:44:07.988300 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-03-05 00:44:07.988309 | orchestrator | Thursday 05 March 2026 00:44:05 +0000 (0:00:00.147) 0:01:11.340 ********
2026-03-05 00:44:07.988317 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:44:07.988326 | orchestrator |
2026-03-05 00:44:07.988334 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-03-05 00:44:07.988343 | orchestrator | Thursday 05 March 2026 00:44:05 +0000 (0:00:00.137) 0:01:11.478 ********
2026-03-05 00:44:07.988351 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:44:07.988360 | orchestrator |
2026-03-05 00:44:07.988368 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-03-05 00:44:07.988377 | orchestrator | Thursday 05 March 2026 00:44:05 +0000 (0:00:00.154) 0:01:11.632 ********
2026-03-05 00:44:07.988385 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:44:07.988394 | orchestrator |
2026-03-05 00:44:07.988403 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-03-05 00:44:07.988411 | orchestrator | Thursday 05 March 2026 00:44:06 +0000 (0:00:00.131) 0:01:11.764 ********
2026-03-05 00:44:07.988419 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:44:07.988428 | orchestrator |
2026-03-05 00:44:07.988436 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-03-05 00:44:07.988445 | orchestrator | Thursday 05 March 2026 00:44:06 +0000 (0:00:00.131) 0:01:11.896 ********
2026-03-05 00:44:07.988454 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:44:07.988463 | orchestrator |
2026-03-05 00:44:07.988471 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-03-05 00:44:07.988480 | orchestrator | Thursday 05 March 2026 00:44:06 +0000 (0:00:00.130) 0:01:12.026 ********
2026-03-05 00:44:07.988488 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:44:07.988497 | orchestrator |
2026-03-05 00:44:07.988505 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-03-05 00:44:07.988521 | orchestrator | Thursday 05 March 2026 00:44:06 +0000 (0:00:00.350) 0:01:12.377 ********
2026-03-05 00:44:07.988530 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:44:07.988538 | orchestrator |
2026-03-05 00:44:07.988547 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-03-05 00:44:07.988556 | orchestrator | Thursday 05 March 2026 00:44:06 +0000 (0:00:00.146) 0:01:12.524 ********
2026-03-05 00:44:07.988564 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:44:07.988573 | orchestrator |
2026-03-05 00:44:07.988581 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-03-05 00:44:07.988590 | orchestrator | Thursday 05 March 2026 00:44:07 +0000 (0:00:00.149) 0:01:12.673 ********
2026-03-05 00:44:07.988599 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:44:07.988607 | orchestrator |
2026-03-05 00:44:07.988616 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-03-05 00:44:07.988624 | orchestrator | Thursday 05 March 2026 00:44:07 +0000 (0:00:00.152) 0:01:12.826 ********
2026-03-05 00:44:07.988632 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:44:07.988641 | orchestrator |
2026-03-05 00:44:07.988650 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-03-05 00:44:07.988658 | orchestrator | Thursday 05 March 2026 00:44:07 +0000 (0:00:00.131) 0:01:12.957 ********
2026-03-05 00:44:07.988667 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:44:07.988675 | orchestrator |
2026-03-05 00:44:07.988684 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-03-05 00:44:07.988692 | orchestrator | Thursday 05 March 2026 00:44:07 +0000 (0:00:00.144) 0:01:13.102 ********
2026-03-05 00:44:07.988701 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:44:07.988709 | orchestrator |
2026-03-05 00:44:07.988718 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-03-05 00:44:07.988726 | orchestrator | Thursday 05 March 2026 00:44:07 +0000 (0:00:00.134) 0:01:13.237 ********
2026-03-05 00:44:07.988735 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7f4ff93a-c4fd-5f9b-af1c-107d8e49bf15', 'data_vg': 'ceph-7f4ff93a-c4fd-5f9b-af1c-107d8e49bf15'})
2026-03-05 00:44:07.988744 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-56dff28b-2239-50bc-bb4f-66f9aa80ba88', 'data_vg': 'ceph-56dff28b-2239-50bc-bb4f-66f9aa80ba88'})
2026-03-05 00:44:07.988753 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:44:07.988761 | orchestrator |
2026-03-05 00:44:07.988770 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-03-05 00:44:07.988778 | orchestrator | Thursday 05 March 2026 00:44:07 +0000 (0:00:00.158) 0:01:13.395 ********
2026-03-05 00:44:07.988787 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7f4ff93a-c4fd-5f9b-af1c-107d8e49bf15', 'data_vg': 'ceph-7f4ff93a-c4fd-5f9b-af1c-107d8e49bf15'})
2026-03-05 00:44:07.988795 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-56dff28b-2239-50bc-bb4f-66f9aa80ba88', 'data_vg': 'ceph-56dff28b-2239-50bc-bb4f-66f9aa80ba88'})
2026-03-05 00:44:07.988804 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:44:07.988813 | orchestrator |
2026-03-05 00:44:07.988821 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-03-05 00:44:07.988830 | orchestrator | Thursday 05 March 2026 00:44:07 +0000 (0:00:00.151) 0:01:13.547 ********
2026-03-05 00:44:07.988845 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7f4ff93a-c4fd-5f9b-af1c-107d8e49bf15', 'data_vg': 'ceph-7f4ff93a-c4fd-5f9b-af1c-107d8e49bf15'})
2026-03-05 00:44:11.215245 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-56dff28b-2239-50bc-bb4f-66f9aa80ba88', 'data_vg': 'ceph-56dff28b-2239-50bc-bb4f-66f9aa80ba88'})
2026-03-05 00:44:11.215397 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:44:11.215415 | orchestrator |
2026-03-05 00:44:11.215429 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-03-05 00:44:11.215442 | orchestrator | Thursday 05 March 2026 00:44:08 +0000 (0:00:00.158) 0:01:13.705 ********
2026-03-05 00:44:11.215507 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7f4ff93a-c4fd-5f9b-af1c-107d8e49bf15', 'data_vg': 'ceph-7f4ff93a-c4fd-5f9b-af1c-107d8e49bf15'})
2026-03-05 00:44:11.215521 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-56dff28b-2239-50bc-bb4f-66f9aa80ba88', 'data_vg': 'ceph-56dff28b-2239-50bc-bb4f-66f9aa80ba88'})
2026-03-05 00:44:11.215531 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:44:11.215542 | orchestrator |
2026-03-05 00:44:11.215554 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-03-05 00:44:11.215565 | orchestrator | Thursday 05 March 2026 00:44:08 +0000 (0:00:00.152) 0:01:13.857 ********
2026-03-05 00:44:11.215576 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7f4ff93a-c4fd-5f9b-af1c-107d8e49bf15', 'data_vg': 'ceph-7f4ff93a-c4fd-5f9b-af1c-107d8e49bf15'})
2026-03-05 00:44:11.215819 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-56dff28b-2239-50bc-bb4f-66f9aa80ba88', 'data_vg': 'ceph-56dff28b-2239-50bc-bb4f-66f9aa80ba88'})
2026-03-05 00:44:11.215941 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:44:11.215951 | orchestrator |
2026-03-05 00:44:11.215960 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-03-05 00:44:11.215969 | orchestrator | Thursday 05 March 2026 00:44:08 +0000 (0:00:00.159) 0:01:14.017 ********
2026-03-05 00:44:11.215974 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7f4ff93a-c4fd-5f9b-af1c-107d8e49bf15', 'data_vg': 'ceph-7f4ff93a-c4fd-5f9b-af1c-107d8e49bf15'})
2026-03-05 00:44:11.215981 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-56dff28b-2239-50bc-bb4f-66f9aa80ba88', 'data_vg': 'ceph-56dff28b-2239-50bc-bb4f-66f9aa80ba88'})
2026-03-05 00:44:11.215986 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:44:11.215992 | orchestrator |
2026-03-05 00:44:11.215998 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-03-05 00:44:11.216003 | orchestrator | Thursday 05 March 2026 00:44:08 +0000 (0:00:00.418) 0:01:14.436 ********
2026-03-05 00:44:11.216009 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7f4ff93a-c4fd-5f9b-af1c-107d8e49bf15', 'data_vg': 'ceph-7f4ff93a-c4fd-5f9b-af1c-107d8e49bf15'})
2026-03-05 00:44:11.216015 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-56dff28b-2239-50bc-bb4f-66f9aa80ba88', 'data_vg': 'ceph-56dff28b-2239-50bc-bb4f-66f9aa80ba88'})
2026-03-05 00:44:11.216020 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:44:11.216026 | orchestrator |
2026-03-05 00:44:11.216031 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-03-05 00:44:11.216036 | orchestrator | Thursday 05 March 2026 00:44:08 +0000 (0:00:00.176) 0:01:14.613 ********
2026-03-05 00:44:11.216066 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7f4ff93a-c4fd-5f9b-af1c-107d8e49bf15', 'data_vg': 'ceph-7f4ff93a-c4fd-5f9b-af1c-107d8e49bf15'})
2026-03-05 00:44:11.216072 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-56dff28b-2239-50bc-bb4f-66f9aa80ba88', 'data_vg': 'ceph-56dff28b-2239-50bc-bb4f-66f9aa80ba88'})
2026-03-05 00:44:11.216077 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:44:11.216082 | orchestrator |
2026-03-05 00:44:11.216088 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-03-05 00:44:11.216093 | orchestrator | Thursday 05 March 2026 00:44:09 +0000 (0:00:00.165) 0:01:14.778 ********
2026-03-05 00:44:11.216099 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:44:11.216105 | orchestrator |
2026-03-05 00:44:11.216111 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-03-05 00:44:11.216116 | orchestrator | Thursday 05 March 2026 00:44:09 +0000 (0:00:00.516) 0:01:15.295 ********
2026-03-05 00:44:11.216121 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:44:11.216127 | orchestrator |
2026-03-05 00:44:11.216132 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-03-05 00:44:11.216158 | orchestrator | Thursday 05 March 2026 00:44:10 +0000 (0:00:00.560) 0:01:15.855 ********
2026-03-05 00:44:11.216172 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:44:11.216177 | orchestrator |
2026-03-05 00:44:11.216182 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-03-05 00:44:11.216188 | orchestrator | Thursday 05 March 2026 00:44:10 +0000 (0:00:00.153) 0:01:16.009 ********
2026-03-05 00:44:11.216194 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-56dff28b-2239-50bc-bb4f-66f9aa80ba88', 'vg_name': 'ceph-56dff28b-2239-50bc-bb4f-66f9aa80ba88'})
2026-03-05 00:44:11.216201 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-7f4ff93a-c4fd-5f9b-af1c-107d8e49bf15', 'vg_name': 'ceph-7f4ff93a-c4fd-5f9b-af1c-107d8e49bf15'})
2026-03-05 00:44:11.216206 | orchestrator |
2026-03-05 00:44:11.216212 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-03-05 00:44:11.216217 | orchestrator | Thursday 05 March 2026 00:44:10 +0000 (0:00:00.176) 0:01:16.185 ********
2026-03-05 00:44:11.216249 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7f4ff93a-c4fd-5f9b-af1c-107d8e49bf15', 'data_vg': 'ceph-7f4ff93a-c4fd-5f9b-af1c-107d8e49bf15'})
2026-03-05 00:44:11.216255 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-56dff28b-2239-50bc-bb4f-66f9aa80ba88', 'data_vg': 'ceph-56dff28b-2239-50bc-bb4f-66f9aa80ba88'})
2026-03-05 00:44:11.216260 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:44:11.216266 | orchestrator |
2026-03-05 00:44:11.216271 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-03-05 00:44:11.216276 | orchestrator | Thursday 05 March 2026 00:44:10 +0000 (0:00:00.162) 0:01:16.348 ********
2026-03-05 00:44:11.216282 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7f4ff93a-c4fd-5f9b-af1c-107d8e49bf15', 'data_vg': 'ceph-7f4ff93a-c4fd-5f9b-af1c-107d8e49bf15'})
2026-03-05 00:44:11.216287 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-56dff28b-2239-50bc-bb4f-66f9aa80ba88', 'data_vg': 'ceph-56dff28b-2239-50bc-bb4f-66f9aa80ba88'})
2026-03-05 00:44:11.216292 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:44:11.216298 | orchestrator |
2026-03-05 00:44:11.216303 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-03-05 00:44:11.216308 | orchestrator | Thursday 05 March 2026 00:44:10 +0000 (0:00:00.159) 0:01:16.508 ********
2026-03-05 00:44:11.216313 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7f4ff93a-c4fd-5f9b-af1c-107d8e49bf15', 'data_vg': 'ceph-7f4ff93a-c4fd-5f9b-af1c-107d8e49bf15'})
2026-03-05 00:44:11.216329 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-56dff28b-2239-50bc-bb4f-66f9aa80ba88', 'data_vg': 'ceph-56dff28b-2239-50bc-bb4f-66f9aa80ba88'})
2026-03-05 00:44:11.216334 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:44:11.216340 | orchestrator |
2026-03-05 00:44:11.216345 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-03-05 00:44:11.216350 | orchestrator | Thursday 05 March 2026 00:44:11 +0000 (0:00:00.175) 0:01:16.683 ********
2026-03-05 00:44:11.216356 |
orchestrator | ok: [testbed-node-5] => { 2026-03-05 00:44:11.216361 | orchestrator |  "lvm_report": { 2026-03-05 00:44:11.216367 | orchestrator |  "lv": [ 2026-03-05 00:44:11.216373 | orchestrator |  { 2026-03-05 00:44:11.216378 | orchestrator |  "lv_name": "osd-block-56dff28b-2239-50bc-bb4f-66f9aa80ba88", 2026-03-05 00:44:11.216384 | orchestrator |  "vg_name": "ceph-56dff28b-2239-50bc-bb4f-66f9aa80ba88" 2026-03-05 00:44:11.216390 | orchestrator |  }, 2026-03-05 00:44:11.216395 | orchestrator |  { 2026-03-05 00:44:11.216400 | orchestrator |  "lv_name": "osd-block-7f4ff93a-c4fd-5f9b-af1c-107d8e49bf15", 2026-03-05 00:44:11.216406 | orchestrator |  "vg_name": "ceph-7f4ff93a-c4fd-5f9b-af1c-107d8e49bf15" 2026-03-05 00:44:11.216411 | orchestrator |  } 2026-03-05 00:44:11.216416 | orchestrator |  ], 2026-03-05 00:44:11.216422 | orchestrator |  "pv": [ 2026-03-05 00:44:11.216432 | orchestrator |  { 2026-03-05 00:44:11.216437 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-05 00:44:11.216443 | orchestrator |  "vg_name": "ceph-7f4ff93a-c4fd-5f9b-af1c-107d8e49bf15" 2026-03-05 00:44:11.216448 | orchestrator |  }, 2026-03-05 00:44:11.216453 | orchestrator |  { 2026-03-05 00:44:11.216458 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-05 00:44:11.216464 | orchestrator |  "vg_name": "ceph-56dff28b-2239-50bc-bb4f-66f9aa80ba88" 2026-03-05 00:44:11.216469 | orchestrator |  } 2026-03-05 00:44:11.216474 | orchestrator |  ] 2026-03-05 00:44:11.216480 | orchestrator |  } 2026-03-05 00:44:11.216485 | orchestrator | } 2026-03-05 00:44:11.216491 | orchestrator | 2026-03-05 00:44:11.216496 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-05 00:44:11.216502 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-05 00:44:11.216507 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-05 00:44:11.216513 | 
orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-05 00:44:11.216518 | orchestrator | 2026-03-05 00:44:11.216523 | orchestrator | 2026-03-05 00:44:11.216528 | orchestrator | 2026-03-05 00:44:11.216534 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-05 00:44:11.216539 | orchestrator | Thursday 05 March 2026 00:44:11 +0000 (0:00:00.152) 0:01:16.836 ******** 2026-03-05 00:44:11.216544 | orchestrator | =============================================================================== 2026-03-05 00:44:11.216550 | orchestrator | Create block VGs -------------------------------------------------------- 5.61s 2026-03-05 00:44:11.216555 | orchestrator | Create block LVs -------------------------------------------------------- 4.20s 2026-03-05 00:44:11.216560 | orchestrator | Add known partitions to the list of available block devices ------------- 1.83s 2026-03-05 00:44:11.216566 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.76s 2026-03-05 00:44:11.216571 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.75s 2026-03-05 00:44:11.216576 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.61s 2026-03-05 00:44:11.216581 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.56s 2026-03-05 00:44:11.216587 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.54s 2026-03-05 00:44:11.216596 | orchestrator | Add known links to the list of available block devices ------------------ 1.47s 2026-03-05 00:44:11.642557 | orchestrator | Add known partitions to the list of available block devices ------------- 1.09s 2026-03-05 00:44:11.642720 | orchestrator | Add known partitions to the list of available block devices ------------- 1.00s 2026-03-05 00:44:11.642745 | 
orchestrator | Print LVM report data --------------------------------------------------- 0.97s 2026-03-05 00:44:11.642761 | orchestrator | Add known links to the list of available block devices ------------------ 0.90s 2026-03-05 00:44:11.642777 | orchestrator | Add known links to the list of available block devices ------------------ 0.89s 2026-03-05 00:44:11.642793 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.81s 2026-03-05 00:44:11.642810 | orchestrator | Get initial list of available block devices ----------------------------- 0.77s 2026-03-05 00:44:11.642826 | orchestrator | Print 'Create WAL LVs for ceph_db_wal_devices' -------------------------- 0.76s 2026-03-05 00:44:11.642842 | orchestrator | Print 'Create WAL VGs' -------------------------------------------------- 0.73s 2026-03-05 00:44:11.642858 | orchestrator | Create WAL LVs for ceph_wal_devices ------------------------------------- 0.72s 2026-03-05 00:44:11.642875 | orchestrator | Add known links to the list of available block devices ------------------ 0.72s 2026-03-05 00:44:24.113213 | orchestrator | 2026-03-05 00:44:24 | INFO  | Prepare task for execution of facts. 2026-03-05 00:44:24.193775 | orchestrator | 2026-03-05 00:44:24 | INFO  | Task 714fc2c0-c99e-4d89-83e6-92807f034b3a (facts) was prepared for execution. 2026-03-05 00:44:24.194162 | orchestrator | 2026-03-05 00:44:24 | INFO  | It takes a moment until task 714fc2c0-c99e-4d89-83e6-92807f034b3a (facts) has been started and output is visible here. 
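The LVM play above gathers Ceph LVs and PVs with their VGs, then runs a "Combine JSON from _lvs_cmd_output/_pvs_cmd_output" step and prints the merged `lvm_report`. A minimal sketch of that combine step, assuming the role shells out to `lvs`/`pvs` with `--reportformat json` (real lvm2 flags) and that the reports carry the usual `{"report": [{"lv": [...]}]}` shape — the helper name `combine_lvm_report` is hypothetical:

```python
import json


def combine_lvm_report(lvs_json: str, pvs_json: str) -> dict:
    """Merge the JSON reports of `lvs` and `pvs` into a single
    lvm_report structure like the one printed by the play above.
    Assumes lvm2's `--reportformat json` layout: the payload sits
    under report[0]['lv'] / report[0]['pv']."""
    lv = json.loads(lvs_json)["report"][0]["lv"]
    pv = json.loads(pvs_json)["report"][0]["pv"]
    return {"lv": lv, "pv": pv}


# Sample inputs shaped like `lvs -o lv_name,vg_name --reportformat json`
# and `pvs -o pv_name,vg_name --reportformat json` (values from the log):
lvs_out = json.dumps({"report": [{"lv": [
    {"lv_name": "osd-block-56dff28b-2239-50bc-bb4f-66f9aa80ba88",
     "vg_name": "ceph-56dff28b-2239-50bc-bb4f-66f9aa80ba88"},
]}]})
pvs_out = json.dumps({"report": [{"pv": [
    {"pv_name": "/dev/sdb",
     "vg_name": "ceph-7f4ff93a-c4fd-5f9b-af1c-107d8e49bf15"},
]}]})

lvm_report = combine_lvm_report(lvs_out, pvs_out)
```

The merged dict then feeds the "Create list of VG/LV names" and the "Fail if … LV defined in lvm_volumes is missing" checks, which only need the name pairs.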
2026-03-05 00:44:37.711708 | orchestrator | 2026-03-05 00:44:37.712513 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-03-05 00:44:37.712534 | orchestrator | 2026-03-05 00:44:37.712541 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-05 00:44:37.712546 | orchestrator | Thursday 05 March 2026 00:44:28 +0000 (0:00:00.303) 0:00:00.303 ******** 2026-03-05 00:44:37.712551 | orchestrator | ok: [testbed-manager] 2026-03-05 00:44:37.712556 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:44:37.712561 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:44:37.712566 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:44:37.712570 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:44:37.712575 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:44:37.712579 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:44:37.712583 | orchestrator | 2026-03-05 00:44:37.712588 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-03-05 00:44:37.712592 | orchestrator | Thursday 05 March 2026 00:44:29 +0000 (0:00:01.114) 0:00:01.418 ******** 2026-03-05 00:44:37.712597 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:44:37.712602 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:44:37.712606 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:44:37.712610 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:44:37.712614 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:44:37.712618 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:44:37.712623 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:44:37.712627 | orchestrator | 2026-03-05 00:44:37.712631 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-05 00:44:37.712635 | orchestrator | 2026-03-05 00:44:37.712640 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-03-05 00:44:37.712644 | orchestrator | Thursday 05 March 2026 00:44:31 +0000 (0:00:01.375) 0:00:02.793 ******** 2026-03-05 00:44:37.712648 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:44:37.712652 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:44:37.712656 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:44:37.712661 | orchestrator | ok: [testbed-manager] 2026-03-05 00:44:37.712665 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:44:37.712669 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:44:37.712673 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:44:37.712677 | orchestrator | 2026-03-05 00:44:37.712681 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-05 00:44:37.712685 | orchestrator | 2026-03-05 00:44:37.712690 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-05 00:44:37.712694 | orchestrator | Thursday 05 March 2026 00:44:36 +0000 (0:00:05.679) 0:00:08.473 ******** 2026-03-05 00:44:37.712698 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:44:37.712702 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:44:37.712706 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:44:37.712710 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:44:37.712714 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:44:37.712718 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:44:37.712723 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:44:37.712727 | orchestrator | 2026-03-05 00:44:37.712731 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-05 00:44:37.712736 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-05 00:44:37.712741 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-03-05 00:44:37.712759 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-05 00:44:37.712764 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-05 00:44:37.712768 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-05 00:44:37.712772 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-05 00:44:37.712776 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-05 00:44:37.712780 | orchestrator | 2026-03-05 00:44:37.712785 | orchestrator | 2026-03-05 00:44:37.712789 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-05 00:44:37.712793 | orchestrator | Thursday 05 March 2026 00:44:37 +0000 (0:00:00.513) 0:00:08.987 ******** 2026-03-05 00:44:37.712797 | orchestrator | =============================================================================== 2026-03-05 00:44:37.712801 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.68s 2026-03-05 00:44:37.712806 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.38s 2026-03-05 00:44:37.712810 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.12s 2026-03-05 00:44:37.712814 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.51s 2026-03-05 00:44:50.195767 | orchestrator | 2026-03-05 00:44:50 | INFO  | Prepare task for execution of frr. 2026-03-05 00:44:50.265642 | orchestrator | 2026-03-05 00:44:50 | INFO  | Task a5cc2fb8-db89-4b26-8529-fff973a466ec (frr) was prepared for execution. 
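Each collection step here follows the same asynchronous pattern: a task is "prepared for execution", runs in the background, and the console then polls its state once per second ("Task … is in state STARTED … Wait 1 second(s) until the next check", visible further below). A minimal sketch of such a poll loop, assuming a caller-supplied `task_states` callable (hypothetical; in OSISM this would come from the task backend) that returns a mapping of task ID to state:

```python
import time


def wait_for_tasks(task_states, interval: float = 1.0) -> int:
    """Poll until no task is in state STARTED, printing each check
    in the style of the console output above. `task_states` is a
    zero-argument callable returning {task_id: state}. Returns the
    number of polling rounds performed."""
    polls = 0
    while True:
        polls += 1
        started = [t for t, s in task_states().items() if s == "STARTED"]
        for task_id in started:
            print(f"Task {task_id} is in state STARTED")
        if not started:
            return polls
        print(f"Wait {int(interval)} second(s) until the next check")
        time.sleep(interval)


# Usage with a fake state provider: one round STARTED, then done.
states = iter([
    {"b298fc93": "STARTED"},
    {"b298fc93": "SUCCESS"},
])
rounds = wait_for_tasks(lambda: next(states), interval=0)
```

This explains why the log interleaves seven task IDs per round: all tasks queued from the nutshell collection are checked together on every tick.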
2026-03-05 00:44:50.265744 | orchestrator | 2026-03-05 00:44:50 | INFO  | It takes a moment until task a5cc2fb8-db89-4b26-8529-fff973a466ec (frr) has been started and output is visible here. 2026-03-05 00:45:16.500447 | orchestrator | 2026-03-05 00:45:16.500579 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-03-05 00:45:16.500598 | orchestrator | 2026-03-05 00:45:16.500611 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-03-05 00:45:16.500624 | orchestrator | Thursday 05 March 2026 00:44:54 +0000 (0:00:00.238) 0:00:00.238 ******** 2026-03-05 00:45:16.500635 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-03-05 00:45:16.500648 | orchestrator | 2026-03-05 00:45:16.500659 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-03-05 00:45:16.500670 | orchestrator | Thursday 05 March 2026 00:44:55 +0000 (0:00:00.238) 0:00:00.477 ******** 2026-03-05 00:45:16.500681 | orchestrator | changed: [testbed-manager] 2026-03-05 00:45:16.500692 | orchestrator | 2026-03-05 00:45:16.500703 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-03-05 00:45:16.500714 | orchestrator | Thursday 05 March 2026 00:44:56 +0000 (0:00:01.227) 0:00:01.704 ******** 2026-03-05 00:45:16.500725 | orchestrator | changed: [testbed-manager] 2026-03-05 00:45:16.500736 | orchestrator | 2026-03-05 00:45:16.500747 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-03-05 00:45:16.500757 | orchestrator | Thursday 05 March 2026 00:45:06 +0000 (0:00:09.941) 0:00:11.646 ******** 2026-03-05 00:45:16.500768 | orchestrator | ok: [testbed-manager] 2026-03-05 00:45:16.500780 | orchestrator | 2026-03-05 00:45:16.500790 | orchestrator | TASK 
[osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-03-05 00:45:16.500801 | orchestrator | Thursday 05 March 2026 00:45:07 +0000 (0:00:01.064) 0:00:12.710 ******** 2026-03-05 00:45:16.500812 | orchestrator | changed: [testbed-manager] 2026-03-05 00:45:16.500919 | orchestrator | 2026-03-05 00:45:16.500944 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-03-05 00:45:16.500963 | orchestrator | Thursday 05 March 2026 00:45:08 +0000 (0:00:00.875) 0:00:13.586 ******** 2026-03-05 00:45:16.501007 | orchestrator | ok: [testbed-manager] 2026-03-05 00:45:16.501029 | orchestrator | 2026-03-05 00:45:16.501046 | orchestrator | TASK [osism.services.frr : Write frr_config_template to temporary file] ******** 2026-03-05 00:45:16.501063 | orchestrator | Thursday 05 March 2026 00:45:09 +0000 (0:00:01.081) 0:00:14.668 ******** 2026-03-05 00:45:16.501080 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:45:16.501097 | orchestrator | 2026-03-05 00:45:16.501116 | orchestrator | TASK [osism.services.frr : Render frr.conf from frr_config_template variable] *** 2026-03-05 00:45:16.501132 | orchestrator | Thursday 05 March 2026 00:45:09 +0000 (0:00:00.149) 0:00:14.817 ******** 2026-03-05 00:45:16.501149 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:45:16.501167 | orchestrator | 2026-03-05 00:45:16.501184 | orchestrator | TASK [osism.services.frr : Remove temporary frr_config_template file] ********** 2026-03-05 00:45:16.501201 | orchestrator | Thursday 05 March 2026 00:45:09 +0000 (0:00:00.148) 0:00:14.965 ******** 2026-03-05 00:45:16.501220 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:45:16.501238 | orchestrator | 2026-03-05 00:45:16.501256 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-03-05 00:45:16.501277 | orchestrator | Thursday 05 March 2026 00:45:09 +0000 (0:00:00.145) 0:00:15.111 ******** 2026-03-05 
00:45:16.501296 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:45:16.501315 | orchestrator | 2026-03-05 00:45:16.501330 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-03-05 00:45:16.501341 | orchestrator | Thursday 05 March 2026 00:45:09 +0000 (0:00:00.145) 0:00:15.256 ******** 2026-03-05 00:45:16.501352 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:45:16.501362 | orchestrator | 2026-03-05 00:45:16.501373 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-03-05 00:45:16.501384 | orchestrator | Thursday 05 March 2026 00:45:09 +0000 (0:00:00.142) 0:00:15.399 ******** 2026-03-05 00:45:16.501395 | orchestrator | changed: [testbed-manager] 2026-03-05 00:45:16.501405 | orchestrator | 2026-03-05 00:45:16.501416 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-03-05 00:45:16.501427 | orchestrator | Thursday 05 March 2026 00:45:11 +0000 (0:00:01.165) 0:00:16.564 ******** 2026-03-05 00:45:16.501438 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-03-05 00:45:16.501449 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-03-05 00:45:16.501461 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-03-05 00:45:16.501472 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-03-05 00:45:16.501483 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-03-05 00:45:16.501494 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-03-05 00:45:16.501504 | orchestrator | 2026-03-05 00:45:16.501515 | orchestrator | TASK 
[osism.services.frr : Manage frr service] ********************************* 2026-03-05 00:45:16.501526 | orchestrator | Thursday 05 March 2026 00:45:13 +0000 (0:00:02.369) 0:00:18.934 ******** 2026-03-05 00:45:16.501537 | orchestrator | ok: [testbed-manager] 2026-03-05 00:45:16.501547 | orchestrator | 2026-03-05 00:45:16.501558 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-03-05 00:45:16.501569 | orchestrator | Thursday 05 March 2026 00:45:14 +0000 (0:00:01.267) 0:00:20.201 ******** 2026-03-05 00:45:16.501579 | orchestrator | changed: [testbed-manager] 2026-03-05 00:45:16.501590 | orchestrator | 2026-03-05 00:45:16.501601 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-05 00:45:16.501653 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-05 00:45:16.501665 | orchestrator | 2026-03-05 00:45:16.501676 | orchestrator | 2026-03-05 00:45:16.501709 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-05 00:45:16.501720 | orchestrator | Thursday 05 March 2026 00:45:16 +0000 (0:00:01.418) 0:00:21.619 ******** 2026-03-05 00:45:16.501731 | orchestrator | =============================================================================== 2026-03-05 00:45:16.501742 | orchestrator | osism.services.frr : Install frr package -------------------------------- 9.94s 2026-03-05 00:45:16.501752 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.37s 2026-03-05 00:45:16.501763 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.42s 2026-03-05 00:45:16.501774 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.27s 2026-03-05 00:45:16.501785 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.23s 
2026-03-05 00:45:16.501796 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.17s 2026-03-05 00:45:16.501806 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.08s 2026-03-05 00:45:16.501817 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.06s 2026-03-05 00:45:16.501828 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.88s 2026-03-05 00:45:16.501838 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.24s 2026-03-05 00:45:16.501849 | orchestrator | osism.services.frr : Write frr_config_template to temporary file -------- 0.15s 2026-03-05 00:45:16.501860 | orchestrator | osism.services.frr : Render frr.conf from frr_config_template variable --- 0.15s 2026-03-05 00:45:16.501870 | orchestrator | osism.services.frr : Remove temporary frr_config_template file ---------- 0.15s 2026-03-05 00:45:16.501881 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.15s 2026-03-05 00:45:16.501892 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.14s 2026-03-05 00:45:16.811060 | orchestrator | 2026-03-05 00:45:16.814356 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Thu Mar 5 00:45:16 UTC 2026 2026-03-05 00:45:16.814451 | orchestrator | 2026-03-05 00:45:18.814533 | orchestrator | 2026-03-05 00:45:18 | INFO  | Collection nutshell is prepared for execution 2026-03-05 00:45:18.814632 | orchestrator | 2026-03-05 00:45:18 | INFO  | A [0] - dotfiles 2026-03-05 00:45:28.843432 | orchestrator | 2026-03-05 00:45:28 | INFO  | A [0] - homer 2026-03-05 00:45:28.843519 | orchestrator | 2026-03-05 00:45:28 | INFO  | A [0] - netdata 2026-03-05 00:45:28.843530 | orchestrator | 2026-03-05 00:45:28 | INFO  | A [0] - openstackclient 2026-03-05 00:45:28.843548 | orchestrator | 2026-03-05 00:45:28 
| INFO  | A [0] - phpmyadmin 2026-03-05 00:45:28.844049 | orchestrator | 2026-03-05 00:45:28 | INFO  | A [0] - common 2026-03-05 00:45:28.849834 | orchestrator | 2026-03-05 00:45:28 | INFO  | A [1] -- loadbalancer 2026-03-05 00:45:28.849893 | orchestrator | 2026-03-05 00:45:28 | INFO  | A [2] --- opensearch 2026-03-05 00:45:28.849906 | orchestrator | 2026-03-05 00:45:28 | INFO  | A [2] --- mariadb-ng 2026-03-05 00:45:28.849917 | orchestrator | 2026-03-05 00:45:28 | INFO  | A [3] ---- horizon 2026-03-05 00:45:28.849929 | orchestrator | 2026-03-05 00:45:28 | INFO  | A [3] ---- keystone 2026-03-05 00:45:28.850117 | orchestrator | 2026-03-05 00:45:28 | INFO  | A [4] ----- neutron 2026-03-05 00:45:28.850139 | orchestrator | 2026-03-05 00:45:28 | INFO  | A [5] ------ wait-for-nova 2026-03-05 00:45:28.850683 | orchestrator | 2026-03-05 00:45:28 | INFO  | A [6] ------- octavia 2026-03-05 00:45:28.852655 | orchestrator | 2026-03-05 00:45:28 | INFO  | A [4] ----- barbican 2026-03-05 00:45:28.852941 | orchestrator | 2026-03-05 00:45:28 | INFO  | A [4] ----- designate 2026-03-05 00:45:28.853006 | orchestrator | 2026-03-05 00:45:28 | INFO  | A [4] ----- ironic 2026-03-05 00:45:28.854667 | orchestrator | 2026-03-05 00:45:28 | INFO  | A [4] ----- placement 2026-03-05 00:45:28.854702 | orchestrator | 2026-03-05 00:45:28 | INFO  | A [4] ----- magnum 2026-03-05 00:45:28.854786 | orchestrator | 2026-03-05 00:45:28 | INFO  | A [1] -- openvswitch 2026-03-05 00:45:28.854798 | orchestrator | 2026-03-05 00:45:28 | INFO  | A [2] --- ovn 2026-03-05 00:45:28.854810 | orchestrator | 2026-03-05 00:45:28 | INFO  | A [1] -- memcached 2026-03-05 00:45:28.854827 | orchestrator | 2026-03-05 00:45:28 | INFO  | A [1] -- redis 2026-03-05 00:45:28.855012 | orchestrator | 2026-03-05 00:45:28 | INFO  | A [1] -- rabbitmq-ng 2026-03-05 00:45:28.855613 | orchestrator | 2026-03-05 00:45:28 | INFO  | A [0] - kubernetes 2026-03-05 00:45:28.858249 | orchestrator | 2026-03-05 00:45:28 | INFO  | A [1] -- 
kubeconfig 2026-03-05 00:45:28.858377 | orchestrator | 2026-03-05 00:45:28 | INFO  | A [1] -- copy-kubeconfig 2026-03-05 00:45:28.858685 | orchestrator | 2026-03-05 00:45:28 | INFO  | A [0] - ceph 2026-03-05 00:45:28.861069 | orchestrator | 2026-03-05 00:45:28 | INFO  | A [1] -- ceph-pools 2026-03-05 00:45:28.861113 | orchestrator | 2026-03-05 00:45:28 | INFO  | A [2] --- copy-ceph-keys 2026-03-05 00:45:28.861408 | orchestrator | 2026-03-05 00:45:28 | INFO  | A [3] ---- cephclient 2026-03-05 00:45:28.861432 | orchestrator | 2026-03-05 00:45:28 | INFO  | A [4] ----- ceph-bootstrap-dashboard 2026-03-05 00:45:28.861643 | orchestrator | 2026-03-05 00:45:28 | INFO  | A [4] ----- wait-for-keystone 2026-03-05 00:45:28.861814 | orchestrator | 2026-03-05 00:45:28 | INFO  | A [5] ------ kolla-ceph-rgw 2026-03-05 00:45:28.862075 | orchestrator | 2026-03-05 00:45:28 | INFO  | A [5] ------ glance 2026-03-05 00:45:28.862214 | orchestrator | 2026-03-05 00:45:28 | INFO  | A [5] ------ cinder 2026-03-05 00:45:28.862413 | orchestrator | 2026-03-05 00:45:28 | INFO  | A [5] ------ nova 2026-03-05 00:45:28.862827 | orchestrator | 2026-03-05 00:45:28 | INFO  | A [4] ----- prometheus 2026-03-05 00:45:28.863137 | orchestrator | 2026-03-05 00:45:28 | INFO  | A [5] ------ grafana 2026-03-05 00:45:29.113810 | orchestrator | 2026-03-05 00:45:29 | INFO  | All tasks of the collection nutshell are prepared for execution 2026-03-05 00:45:29.114868 | orchestrator | 2026-03-05 00:45:29 | INFO  | Tasks are running in the background 2026-03-05 00:45:32.779343 | orchestrator | 2026-03-05 00:45:32 | INFO  | No task IDs specified, wait for all currently running tasks 2026-03-05 00:45:34.918839 | orchestrator | 2026-03-05 00:45:34 | INFO  | Task b298fc93-78be-46fb-83ec-91f86f8bf8f2 is in state STARTED 2026-03-05 00:45:34.918924 | orchestrator | 2026-03-05 00:45:34 | INFO  | Task a03daf8a-954d-4ba2-91a4-b9a3638eda28 is in state STARTED 2026-03-05 00:45:34.919314 | orchestrator | 2026-03-05 00:45:34 | INFO 
 | Task 8ec781bc-518e-4c54-a414-2695bac829c3 is in state STARTED
2026-03-05 00:45:34.920216 | orchestrator | 2026-03-05 00:45:34 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED
2026-03-05 00:45:34.920602 | orchestrator | 2026-03-05 00:45:34 | INFO  | Task 74cce24a-416f-4c45-a7ec-1bcbb55f7987 is in state STARTED
2026-03-05 00:45:34.921114 | orchestrator | 2026-03-05 00:45:34 | INFO  | Task 748efb9b-ee82-42a8-ae05-fb8a16f3d61c is in state STARTED
2026-03-05 00:45:34.921768 | orchestrator | 2026-03-05 00:45:34 | INFO  | Task 3450c4c7-3c17-4f2e-b3e5-50921909115f is in state STARTED
2026-03-05 00:45:34.921848 | orchestrator | 2026-03-05 00:45:34 | INFO  | Wait 1 second(s) until the next check
2026-03-05 00:45:37.961194 | orchestrator | 2026-03-05 00:45:37 | INFO  | Task b298fc93-78be-46fb-83ec-91f86f8bf8f2 is in state STARTED
2026-03-05 00:45:37.962247 | orchestrator | 2026-03-05 00:45:37 | INFO  | Task a03daf8a-954d-4ba2-91a4-b9a3638eda28 is in state STARTED
2026-03-05 00:46:03.374117 | orchestrator | 2026-03-05 00:46:03 | INFO  | Task b298fc93-78be-46fb-83ec-91f86f8bf8f2 is in state STARTED
2026-03-05 00:46:03.375619 | orchestrator | 2026-03-05 00:46:03 | INFO  | Task a03daf8a-954d-4ba2-91a4-b9a3638eda28 is in state SUCCESS
2026-03-05 00:46:03.375972 | orchestrator |
2026-03-05 00:46:03.375993 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2026-03-05 00:46:03.376000 | orchestrator |
2026-03-05 00:46:03.376007 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.]
****
2026-03-05 00:46:03.376014 | orchestrator | Thursday 05 March 2026 00:45:44 +0000 (0:00:00.716) 0:00:00.716 ********
2026-03-05 00:46:03.376021 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:46:03.376028 | orchestrator | changed: [testbed-manager]
2026-03-05 00:46:03.376034 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:46:03.376040 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:46:03.376047 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:46:03.376053 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:46:03.376059 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:46:03.376065 | orchestrator |
2026-03-05 00:46:03.376072 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2026-03-05 00:46:03.376078 | orchestrator | Thursday 05 March 2026 00:45:50 +0000 (0:00:01.969) 0:00:06.374 ********
2026-03-05 00:46:03.376110 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-03-05 00:46:03.376117 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-03-05 00:46:03.376124 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-03-05 00:46:03.376130 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-03-05 00:46:03.376137 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-03-05 00:46:03.376143 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-03-05 00:46:03.376149 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-03-05 00:46:03.376156 | orchestrator |
2026-03-05 00:46:03.376162 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] ***
2026-03-05 00:46:03.376169 | orchestrator | Thursday 05 March 2026 00:45:51 +0000 (0:00:01.969) 0:00:08.343 ********
2026-03-05 00:46:03.376179 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-05 00:45:51.363215', 'end': '2026-03-05 00:45:51.370636', 'delta': '0:00:00.007421', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-03-05 00:46:03.376241 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-05 00:45:51.280928', 'end': '2026-03-05 00:45:51.288114', 'delta': '0:00:00.007186', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-03-05 00:46:03.376250 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-05 00:45:51.251365', 'end': '2026-03-05 00:45:51.258545', 'delta': '0:00:00.007180', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-03-05 00:46:03.376273 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-05 00:45:51.270167', 'end': '2026-03-05 00:45:51.278855', 'delta': '0:00:00.008688', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-03-05 00:46:03.376280 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-05 00:45:51.298822', 'end': '2026-03-05 00:45:51.306635', 'delta': '0:00:00.007813', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-03-05 00:46:03.376287 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-05 00:45:51.797688', 'end': '2026-03-05 00:45:51.810782', 'delta': '0:00:00.013094', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-03-05 00:46:03.376299 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-05 00:45:51.803992', 'end': '2026-03-05 00:45:51.812217', 'delta': '0:00:00.008225', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-03-05 00:46:03.376305 | orchestrator |
2026-03-05 00:46:03.376312 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] ****
2026-03-05 00:46:03.376318 | orchestrator | Thursday 05 March 2026 00:45:53 +0000 (0:00:01.824) 0:00:10.167 ********
2026-03-05 00:46:03.376325 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-03-05 00:46:03.376332 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-03-05 00:46:03.376338 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-03-05 00:46:03.376344 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-03-05 00:46:03.376350 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-03-05 00:46:03.376357 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-03-05 00:46:03.376363 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-03-05 00:46:03.376369 | orchestrator |
2026-03-05 00:46:03.376375 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ******************
2026-03-05 00:46:03.376381 | orchestrator | Thursday 05 March 2026 00:45:57 +0000 (0:00:03.585) 0:00:13.753 ********
2026-03-05 00:46:03.376388 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2026-03-05 00:46:03.376394 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2026-03-05 00:46:03.376400 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2026-03-05 00:46:03.376406 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2026-03-05 00:46:03.376412 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2026-03-05 00:46:03.376418 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2026-03-05 00:46:03.376425 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2026-03-05 00:46:03.376431 | orchestrator |
2026-03-05 00:46:03.376437 | orchestrator | PLAY RECAP *********************************************************************
2026-03-05 00:46:03.376448 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-05 00:46:03.376465 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-05 00:46:03.376738 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-05 00:46:03.376748 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-05 00:46:03.376756 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-05 00:46:03.376763 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-05 00:46:03.376775 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-05 00:46:03.376783 | orchestrator |
2026-03-05 00:46:03.376791 | orchestrator |
2026-03-05 00:46:03.376798 | orchestrator | TASKS RECAP ********************************************************************
2026-03-05 00:46:03.376805 | orchestrator | Thursday 05 March 2026 00:46:00 +0000 (0:00:03.355) 0:00:17.108 ********
2026-03-05 00:46:03.376813 | orchestrator | ===============================================================================
2026-03-05 00:46:03.376823 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 5.66s
2026-03-05 00:46:03.376832 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 3.59s
2026-03-05 00:46:03.376839 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 3.36s
2026-03-05 00:46:03.376847 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 1.97s
2026-03-05 00:46:03.376854 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 1.82s
2026-03-05 00:46:03.384683 | orchestrator | 2026-03-05 00:46:03 | INFO  | Task 94a7d02a-ddf9-4ae2-8162-e1f36dd06e03 is in state STARTED
2026-03-05 00:46:03.389049 | orchestrator | 2026-03-05 00:46:03 | INFO  | Task 8ec781bc-518e-4c54-a414-2695bac829c3 is in state STARTED
2026-03-05 00:46:03.397793 | orchestrator | 2026-03-05 00:46:03 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED
2026-03-05 00:46:03.401459 | orchestrator | 2026-03-05 00:46:03 | INFO  | Task 74cce24a-416f-4c45-a7ec-1bcbb55f7987 is in state STARTED
2026-03-05 00:46:03.411323 | orchestrator | 2026-03-05 00:46:03 | INFO  | Task 748efb9b-ee82-42a8-ae05-fb8a16f3d61c is in state STARTED
2026-03-05 00:46:03.413883 | orchestrator | 2026-03-05 00:46:03 | INFO  | Task 3450c4c7-3c17-4f2e-b3e5-50921909115f is in state STARTED
2026-03-05 00:46:03.413969 | orchestrator | 2026-03-05 00:46:03 | INFO  | Wait 1 second(s) until the next check
2026-03-05 00:46:06.496797 | orchestrator | 2026-03-05 00:46:06 | INFO  | Task b298fc93-78be-46fb-83ec-91f86f8bf8f2 is
in state STARTED
2026-03-05 00:46:31.457045 | orchestrator | 2026-03-05 00:46:31 | INFO  | Task 74cce24a-416f-4c45-a7ec-1bcbb55f7987 is in state SUCCESS
2026-03-05 00:46:43.723777 | orchestrator | 2026-03-05 00:46:43 | INFO  | Task 748efb9b-ee82-42a8-ae05-fb8a16f3d61c is in state SUCCESS
2026-03-05 00:47:02.452125 | orchestrator | 2026-03-05 00:47:02 | INFO  | Task b298fc93-78be-46fb-83ec-91f86f8bf8f2 is in state STARTED
2026-03-05 00:47:02.453516 | orchestrator | 2026-03-05 00:47:02 | INFO  | Task 94a7d02a-ddf9-4ae2-8162-e1f36dd06e03 is in state STARTED
2026-03-05 00:47:02.453652 | orchestrator | 2026-03-05 00:47:02 | INFO  | Task 8ec781bc-518e-4c54-a414-2695bac829c3 is in state STARTED
2026-03-05 00:47:02.456008 | orchestrator | 2026-03-05 00:47:02 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED
2026-03-05 00:47:02.456291 | orchestrator | 2026-03-05 00:47:02 | INFO  | Task 3450c4c7-3c17-4f2e-b3e5-50921909115f is in state STARTED
2026-03-05 00:47:02.456365 | orchestrator | 2026-03-05 00:47:02 | INFO  | Wait 1
second(s) until the next check 2026-03-05 00:47:05.524243 | orchestrator | 2026-03-05 00:47:05 | INFO  | Task b298fc93-78be-46fb-83ec-91f86f8bf8f2 is in state STARTED 2026-03-05 00:47:05.524317 | orchestrator | 2026-03-05 00:47:05 | INFO  | Task 94a7d02a-ddf9-4ae2-8162-e1f36dd06e03 is in state STARTED 2026-03-05 00:47:05.524322 | orchestrator | 2026-03-05 00:47:05 | INFO  | Task 8ec781bc-518e-4c54-a414-2695bac829c3 is in state STARTED 2026-03-05 00:47:05.524327 | orchestrator | 2026-03-05 00:47:05 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED 2026-03-05 00:47:05.524331 | orchestrator | 2026-03-05 00:47:05 | INFO  | Task 3450c4c7-3c17-4f2e-b3e5-50921909115f is in state STARTED 2026-03-05 00:47:05.524335 | orchestrator | 2026-03-05 00:47:05 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:47:08.577599 | orchestrator | 2026-03-05 00:47:08 | INFO  | Task b298fc93-78be-46fb-83ec-91f86f8bf8f2 is in state STARTED 2026-03-05 00:47:08.580561 | orchestrator | 2026-03-05 00:47:08 | INFO  | Task 94a7d02a-ddf9-4ae2-8162-e1f36dd06e03 is in state STARTED 2026-03-05 00:47:08.581039 | orchestrator | 2026-03-05 00:47:08 | INFO  | Task 8ec781bc-518e-4c54-a414-2695bac829c3 is in state STARTED 2026-03-05 00:47:08.581740 | orchestrator | 2026-03-05 00:47:08 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED 2026-03-05 00:47:08.582800 | orchestrator | 2026-03-05 00:47:08 | INFO  | Task 3450c4c7-3c17-4f2e-b3e5-50921909115f is in state STARTED 2026-03-05 00:47:08.582874 | orchestrator | 2026-03-05 00:47:08 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:47:11.653599 | orchestrator | 2026-03-05 00:47:11 | INFO  | Task b298fc93-78be-46fb-83ec-91f86f8bf8f2 is in state STARTED 2026-03-05 00:47:11.656271 | orchestrator | 2026-03-05 00:47:11 | INFO  | Task 94a7d02a-ddf9-4ae2-8162-e1f36dd06e03 is in state STARTED 2026-03-05 00:47:11.657010 | orchestrator | 2026-03-05 00:47:11 | INFO  | Task 
8ec781bc-518e-4c54-a414-2695bac829c3 is in state STARTED 2026-03-05 00:47:11.657713 | orchestrator | 2026-03-05 00:47:11 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED 2026-03-05 00:47:11.658735 | orchestrator | 2026-03-05 00:47:11 | INFO  | Task 3450c4c7-3c17-4f2e-b3e5-50921909115f is in state STARTED 2026-03-05 00:47:11.658767 | orchestrator | 2026-03-05 00:47:11 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:47:14.762745 | orchestrator | 2026-03-05 00:47:14 | INFO  | Task b298fc93-78be-46fb-83ec-91f86f8bf8f2 is in state STARTED 2026-03-05 00:47:14.764443 | orchestrator | 2026-03-05 00:47:14 | INFO  | Task 94a7d02a-ddf9-4ae2-8162-e1f36dd06e03 is in state STARTED 2026-03-05 00:47:14.767045 | orchestrator | 2026-03-05 00:47:14 | INFO  | Task 8ec781bc-518e-4c54-a414-2695bac829c3 is in state STARTED 2026-03-05 00:47:14.770201 | orchestrator | 2026-03-05 00:47:14 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED 2026-03-05 00:47:14.771624 | orchestrator | 2026-03-05 00:47:14 | INFO  | Task 3450c4c7-3c17-4f2e-b3e5-50921909115f is in state STARTED 2026-03-05 00:47:14.772023 | orchestrator | 2026-03-05 00:47:14 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:47:17.885156 | orchestrator | 2026-03-05 00:47:17 | INFO  | Task b298fc93-78be-46fb-83ec-91f86f8bf8f2 is in state STARTED 2026-03-05 00:47:17.894149 | orchestrator | 2026-03-05 00:47:17 | INFO  | Task 94a7d02a-ddf9-4ae2-8162-e1f36dd06e03 is in state STARTED 2026-03-05 00:47:17.899605 | orchestrator | 2026-03-05 00:47:17 | INFO  | Task 8ec781bc-518e-4c54-a414-2695bac829c3 is in state STARTED 2026-03-05 00:47:17.912012 | orchestrator | 2026-03-05 00:47:17 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED 2026-03-05 00:47:17.915394 | orchestrator | 2026-03-05 00:47:17 | INFO  | Task 3450c4c7-3c17-4f2e-b3e5-50921909115f is in state STARTED 2026-03-05 00:47:17.915654 | orchestrator | 2026-03-05 00:47:17 | INFO  | Wait 1 
second(s) until the next check 2026-03-05 00:47:20.980007 | orchestrator | 2026-03-05 00:47:20 | INFO  | Task b298fc93-78be-46fb-83ec-91f86f8bf8f2 is in state STARTED 2026-03-05 00:47:20.980930 | orchestrator | 2026-03-05 00:47:20 | INFO  | Task 94a7d02a-ddf9-4ae2-8162-e1f36dd06e03 is in state STARTED 2026-03-05 00:47:20.981632 | orchestrator | 2026-03-05 00:47:20 | INFO  | Task 8ec781bc-518e-4c54-a414-2695bac829c3 is in state STARTED 2026-03-05 00:47:20.982574 | orchestrator | 2026-03-05 00:47:20 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED 2026-03-05 00:47:20.985021 | orchestrator | 2026-03-05 00:47:20 | INFO  | Task 3450c4c7-3c17-4f2e-b3e5-50921909115f is in state STARTED 2026-03-05 00:47:20.985058 | orchestrator | 2026-03-05 00:47:20 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:47:24.038395 | orchestrator | 2026-03-05 00:47:24 | INFO  | Task b298fc93-78be-46fb-83ec-91f86f8bf8f2 is in state STARTED 2026-03-05 00:47:24.040145 | orchestrator | 2026-03-05 00:47:24.040207 | orchestrator | 2026-03-05 00:47:24.040216 | orchestrator | PLAY [Apply role homer] ******************************************************** 2026-03-05 00:47:24.040225 | orchestrator | 2026-03-05 00:47:24.040233 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2026-03-05 00:47:24.040241 | orchestrator | Thursday 05 March 2026 00:45:44 +0000 (0:00:01.204) 0:00:01.204 ******** 2026-03-05 00:47:24.040248 | orchestrator | ok: [testbed-manager] => { 2026-03-05 00:47:24.040258 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 
2026-03-05 00:47:24.040266 | orchestrator | }
2026-03-05 00:47:24.040273 | orchestrator |
2026-03-05 00:47:24.040279 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2026-03-05 00:47:24.040286 | orchestrator | Thursday 05 March 2026 00:45:45 +0000 (0:00:00.561) 0:00:01.766 ********
2026-03-05 00:47:24.040293 | orchestrator | ok: [testbed-manager]
2026-03-05 00:47:24.040301 | orchestrator |
2026-03-05 00:47:24.040311 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2026-03-05 00:47:24.040317 | orchestrator | Thursday 05 March 2026 00:45:47 +0000 (0:00:01.653) 0:00:03.419 ********
2026-03-05 00:47:24.040347 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2026-03-05 00:47:24.040354 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2026-03-05 00:47:24.040361 | orchestrator |
2026-03-05 00:47:24.040368 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2026-03-05 00:47:24.040375 | orchestrator | Thursday 05 March 2026 00:45:49 +0000 (0:00:02.058) 0:00:05.478 ********
2026-03-05 00:47:24.040381 | orchestrator | changed: [testbed-manager]
2026-03-05 00:47:24.040387 | orchestrator |
2026-03-05 00:47:24.040442 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2026-03-05 00:47:24.040453 | orchestrator | Thursday 05 March 2026 00:45:53 +0000 (0:00:03.983) 0:00:09.462 ********
2026-03-05 00:47:24.040461 | orchestrator | changed: [testbed-manager]
2026-03-05 00:47:24.040468 | orchestrator |
2026-03-05 00:47:24.040475 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2026-03-05 00:47:24.040482 | orchestrator | Thursday 05 March 2026 00:45:56 +0000 (0:00:02.998) 0:00:12.460 ********
2026-03-05 00:47:24.040489 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2026-03-05 00:47:24.040496 | orchestrator | ok: [testbed-manager]
2026-03-05 00:47:24.040503 | orchestrator |
2026-03-05 00:47:24.040509 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2026-03-05 00:47:24.040516 | orchestrator | Thursday 05 March 2026 00:46:23 +0000 (0:00:27.762) 0:00:40.223 ********
2026-03-05 00:47:24.040523 | orchestrator | changed: [testbed-manager]
2026-03-05 00:47:24.040529 | orchestrator |
2026-03-05 00:47:24.040536 | orchestrator | PLAY RECAP *********************************************************************
2026-03-05 00:47:24.040543 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-05 00:47:24.040552 | orchestrator |
2026-03-05 00:47:24.040558 | orchestrator |
2026-03-05 00:47:24.040565 | orchestrator | TASKS RECAP ********************************************************************
2026-03-05 00:47:24.040571 | orchestrator | Thursday 05 March 2026 00:46:27 +0000 (0:00:04.047) 0:00:44.271 ********
2026-03-05 00:47:24.040578 | orchestrator | ===============================================================================
2026-03-05 00:47:24.040584 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 27.76s
2026-03-05 00:47:24.040591 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 4.05s
2026-03-05 00:47:24.040597 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 3.98s
2026-03-05 00:47:24.040603 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 3.00s
2026-03-05 00:47:24.040610 | orchestrator | osism.services.homer : Create required directories ---------------------- 2.06s
2026-03-05 00:47:24.040616 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.65s
2026-03-05 00:47:24.040623 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.56s
2026-03-05 00:47:24.040629 | orchestrator |
2026-03-05 00:47:24.040636 | orchestrator |
2026-03-05 00:47:24.040642 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-03-05 00:47:24.040648 | orchestrator |
2026-03-05 00:47:24.040655 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-03-05 00:47:24.040661 | orchestrator | Thursday 05 March 2026 00:45:44 +0000 (0:00:01.046) 0:00:00.474 ********
2026-03-05 00:47:24.040671 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-03-05 00:47:24.040679 | orchestrator |
2026-03-05 00:47:24.040685 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-03-05 00:47:24.040691 | orchestrator | Thursday 05 March 2026 00:45:45 +0000 (0:00:01.046) 0:00:01.521 ********
2026-03-05 00:47:24.040698 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-03-05 00:47:24.040704 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2026-03-05 00:47:24.040718 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-03-05 00:47:24.040725 | orchestrator |
2026-03-05 00:47:24.040731 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-03-05 00:47:24.040738 | orchestrator | Thursday 05 March 2026 00:45:47 +0000 (0:00:02.318) 0:00:03.839 ********
2026-03-05 00:47:24.040745 | orchestrator | changed: [testbed-manager]
2026-03-05 00:47:24.040751 | orchestrator |
2026-03-05 00:47:24.040759 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-03-05 00:47:24.040766 | orchestrator | Thursday 05 March 2026 00:45:50 +0000 (0:00:03.119)
0:00:06.959 ********
2026-03-05 00:47:24.040787 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2026-03-05 00:47:24.040794 | orchestrator | ok: [testbed-manager]
2026-03-05 00:47:24.040801 | orchestrator |
2026-03-05 00:47:24.040807 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-03-05 00:47:24.040814 | orchestrator | Thursday 05 March 2026 00:46:32 +0000 (0:00:42.033) 0:00:48.993 ********
2026-03-05 00:47:24.040821 | orchestrator | changed: [testbed-manager]
2026-03-05 00:47:24.040828 | orchestrator |
2026-03-05 00:47:24.040834 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-03-05 00:47:24.040841 | orchestrator | Thursday 05 March 2026 00:46:35 +0000 (0:00:02.326) 0:00:51.319 ********
2026-03-05 00:47:24.040848 | orchestrator | ok: [testbed-manager]
2026-03-05 00:47:24.040855 | orchestrator |
2026-03-05 00:47:24.040862 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-03-05 00:47:24.040936 | orchestrator | Thursday 05 March 2026 00:46:36 +0000 (0:00:00.970) 0:00:52.289 ********
2026-03-05 00:47:24.040945 | orchestrator | changed: [testbed-manager]
2026-03-05 00:47:24.040952 | orchestrator |
2026-03-05 00:47:24.040959 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-03-05 00:47:24.040965 | orchestrator | Thursday 05 March 2026 00:46:38 +0000 (0:00:02.704) 0:00:54.994 ********
2026-03-05 00:47:24.040973 | orchestrator | changed: [testbed-manager]
2026-03-05 00:47:24.040980 | orchestrator |
2026-03-05 00:47:24.040987 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-03-05 00:47:24.040994 | orchestrator | Thursday 05 March 2026 00:46:39 +0000 (0:00:00.906) 0:00:55.901 ********
2026-03-05 00:47:24.041002 | orchestrator | changed: [testbed-manager]
2026-03-05 00:47:24.041009 | orchestrator |
2026-03-05 00:47:24.041017 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-03-05 00:47:24.041024 | orchestrator | Thursday 05 March 2026 00:46:41 +0000 (0:00:01.646) 0:00:57.547 ********
2026-03-05 00:47:24.041032 | orchestrator | ok: [testbed-manager]
2026-03-05 00:47:24.041039 | orchestrator |
2026-03-05 00:47:24.041047 | orchestrator | PLAY RECAP *********************************************************************
2026-03-05 00:47:24.041056 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-05 00:47:24.041064 | orchestrator |
2026-03-05 00:47:24.041072 | orchestrator |
2026-03-05 00:47:24.041080 | orchestrator | TASKS RECAP ********************************************************************
2026-03-05 00:47:24.041087 | orchestrator | Thursday 05 March 2026 00:46:41 +0000 (0:00:00.476) 0:00:58.024 ********
2026-03-05 00:47:24.041095 | orchestrator | ===============================================================================
2026-03-05 00:47:24.041103 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 42.03s
2026-03-05 00:47:24.041109 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 3.12s
2026-03-05 00:47:24.041117 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.70s
2026-03-05 00:47:24.041123 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 2.33s
2026-03-05 00:47:24.041130 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.32s
2026-03-05 00:47:24.041143 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 1.65s
2026-03-05 00:47:24.041150 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 1.05s
2026-03-05 00:47:24.041157 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.97s
2026-03-05 00:47:24.041164 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.91s
2026-03-05 00:47:24.041171 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.48s
2026-03-05 00:47:24.041178 | orchestrator |
2026-03-05 00:47:24.041184 | orchestrator |
2026-03-05 00:47:24.041191 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2026-03-05 00:47:24.041198 | orchestrator |
2026-03-05 00:47:24.041204 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2026-03-05 00:47:24.041211 | orchestrator | Thursday 05 March 2026 00:46:09 +0000 (0:00:00.335) 0:00:00.335 ********
2026-03-05 00:47:24.041218 | orchestrator | ok: [testbed-manager]
2026-03-05 00:47:24.041225 | orchestrator |
2026-03-05 00:47:24.041232 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2026-03-05 00:47:24.041239 | orchestrator | Thursday 05 March 2026 00:46:10 +0000 (0:00:01.182) 0:00:01.518 ********
2026-03-05 00:47:24.041246 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2026-03-05 00:47:24.041253 | orchestrator |
2026-03-05 00:47:24.041260 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2026-03-05 00:47:24.041271 | orchestrator | Thursday 05 March 2026 00:46:11 +0000 (0:00:00.573) 0:00:02.092 ********
2026-03-05 00:47:24.041278 | orchestrator | changed: [testbed-manager]
2026-03-05 00:47:24.041284 | orchestrator |
2026-03-05 00:47:24.041291 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2026-03-05 00:47:24.041298 | orchestrator | Thursday 05 March 2026 00:46:13 +0000 (0:00:01.550) 0:00:03.642 ********
2026-03-05 00:47:24.041305 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2026-03-05 00:47:24.041311 | orchestrator | ok: [testbed-manager]
2026-03-05 00:47:24.041318 | orchestrator |
2026-03-05 00:47:24.041324 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2026-03-05 00:47:24.041331 | orchestrator | Thursday 05 March 2026 00:47:15 +0000 (0:01:02.606) 0:01:06.248 ********
2026-03-05 00:47:24.041338 | orchestrator | changed: [testbed-manager]
2026-03-05 00:47:24.041345 | orchestrator |
2026-03-05 00:47:24.041351 | orchestrator | PLAY RECAP *********************************************************************
2026-03-05 00:47:24.041359 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-05 00:47:24.041365 | orchestrator |
2026-03-05 00:47:24.041372 | orchestrator |
2026-03-05 00:47:24.041379 | orchestrator | TASKS RECAP ********************************************************************
2026-03-05 00:47:24.041392 | orchestrator | Thursday 05 March 2026 00:47:20 +0000 (0:00:04.734) 0:01:10.983 ********
2026-03-05 00:47:24.041400 | orchestrator | ===============================================================================
2026-03-05 00:47:24.041407 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 62.61s
2026-03-05 00:47:24.041414 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 4.73s
2026-03-05 00:47:24.041421 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.55s
2026-03-05 00:47:24.041427 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.18s
2026-03-05 00:47:24.041433 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.57s
2026-03-05 00:47:24.041440 | orchestrator | 2026-03-05 00:47:24 | INFO  | Task
94a7d02a-ddf9-4ae2-8162-e1f36dd06e03 is in state SUCCESS
2026-03-05 00:47:24.044179 | orchestrator | 2026-03-05 00:47:24 | INFO  | Task 8ec781bc-518e-4c54-a414-2695bac829c3 is in state STARTED
2026-03-05 00:47:24.046438 | orchestrator | 2026-03-05 00:47:24 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED
2026-03-05 00:47:24.047448 | orchestrator | 2026-03-05 00:47:24 | INFO  | Task 3450c4c7-3c17-4f2e-b3e5-50921909115f is in state STARTED
2026-03-05 00:47:24.047522 | orchestrator | 2026-03-05 00:47:24 | INFO  | Wait 1 second(s) until the next check
2026-03-05 00:47:27.088639 | orchestrator | 2026-03-05 00:47:27 | INFO  | Task b298fc93-78be-46fb-83ec-91f86f8bf8f2 is in state STARTED
2026-03-05 00:47:27.089532 | orchestrator | 2026-03-05 00:47:27 | INFO  | Task 8ec781bc-518e-4c54-a414-2695bac829c3 is in state STARTED
2026-03-05 00:47:27.091134 | orchestrator | 2026-03-05 00:47:27 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED
2026-03-05 00:47:27.092284 | orchestrator | 2026-03-05 00:47:27 | INFO  | Task 3450c4c7-3c17-4f2e-b3e5-50921909115f is in state STARTED
2026-03-05 00:47:27.092317 | orchestrator | 2026-03-05 00:47:27 | INFO  | Wait 1 second(s) until the next check
2026-03-05 00:47:45.457681 | orchestrator |
2026-03-05 00:47:45.457738 | orchestrator |
2026-03-05 00:47:45.457750 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-05 00:47:45.457760 | orchestrator |
2026-03-05 00:47:45.457771 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-05 00:47:45.457780 | orchestrator | Thursday 05 March 2026 00:45:41 +0000 (0:00:00.570) 0:00:00.570 ********
2026-03-05 00:47:45.457787 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2026-03-05 00:47:45.457794 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2026-03-05 00:47:45.457799 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2026-03-05 00:47:45.457805 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2026-03-05 00:47:45.457811 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2026-03-05 00:47:45.457817 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2026-03-05 00:47:45.457822 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2026-03-05 00:47:45.457828 | orchestrator |
2026-03-05 00:47:45.457834 | orchestrator | PLAY [Apply role netdata] ******************************************************
2026-03-05 00:47:45.457839 | orchestrator |
2026-03-05 00:47:45.457847 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2026-03-05 00:47:45.457898 | orchestrator | Thursday 05 March 2026 00:45:44 +0000 (0:00:02.965) 0:00:03.535 ********
2026-03-05 00:47:45.457918 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-5, testbed-node-4
2026-03-05 00:47:45.457928 | orchestrator |
2026-03-05 00:47:45.457934 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2026-03-05 00:47:45.457940 | orchestrator | Thursday 05 March 2026 00:45:48 +0000 (0:00:03.211) 0:00:06.746 ********
2026-03-05 00:47:45.457946 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:47:45.457952 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:47:45.457958 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:47:45.457970 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:47:45.457977 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:47:45.457983 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:47:45.457989 | orchestrator | ok: [testbed-manager]
2026-03-05 00:47:45.457995 | orchestrator |
2026-03-05 00:47:45.458001 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2026-03-05 00:47:45.458007 | orchestrator | Thursday 05 March 2026 00:45:50 +0000 (0:00:02.164) 0:00:08.911 ********
2026-03-05 00:47:45.458048 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:47:45.458056 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:47:45.458062 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:47:45.458067 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:47:45.458074 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:47:45.458084 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:47:45.458094 | orchestrator | ok: [testbed-manager]
2026-03-05 00:47:45.458105 | orchestrator |
2026-03-05 00:47:45.458115 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2026-03-05 00:47:45.458162 | orchestrator | Thursday 05 March 2026 00:45:53 +0000 (0:00:03.493) 0:00:12.405 ********
2026-03-05 00:47:45.458176 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:47:45.458186 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:47:45.458195 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:47:45.458223 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:47:45.458235 | orchestrator | changed: [testbed-manager]
2026-03-05 00:47:45.458246 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:47:45.458257 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:47:45.458264 | orchestrator |
2026-03-05 00:47:45.458271 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2026-03-05 00:47:45.458278 | orchestrator | Thursday 05 March 2026 00:45:57 +0000 (0:00:04.319) 0:00:16.724 ********
2026-03-05 00:47:45.458285 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:47:45.458291 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:47:45.458298 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:47:45.458310 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:47:45.458317 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:47:45.458323 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:47:45.458330 | orchestrator | changed: [testbed-manager]
2026-03-05 00:47:45.458339 | orchestrator |
2026-03-05 00:47:45.458350 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2026-03-05 00:47:45.458361 | orchestrator | Thursday 05 March 2026 00:46:12 +0000 (0:00:15.018) 0:00:31.742 ********
2026-03-05 00:47:45.458368 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:47:45.458375 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:47:45.458382 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:47:45.458388 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:47:45.458395 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:47:45.458401 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:47:45.458408 | orchestrator | changed: [testbed-manager]
2026-03-05 00:47:45.458414 | orchestrator |
2026-03-05 00:47:45.458421 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2026-03-05 00:47:45.458428 | orchestrator | Thursday 05 March 2026 00:47:08 +0000 (0:00:55.460) 0:01:27.203 ********
2026-03-05 00:47:45.458435 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-05 00:47:45.458442 | orchestrator |
2026-03-05 00:47:45.458449 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2026-03-05 00:47:45.458456 | orchestrator | Thursday 05 March 2026 00:47:10 +0000 (0:00:01.704) 0:01:28.907 ********
2026-03-05 00:47:45.458463 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2026-03-05 00:47:45.458470 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2026-03-05 00:47:45.458477 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2026-03-05 00:47:45.458483 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2026-03-05 00:47:45.458501 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2026-03-05 00:47:45.458508 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2026-03-05 00:47:45.458515 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2026-03-05 00:47:45.458522 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2026-03-05 00:47:45.458528 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2026-03-05 00:47:45.458535 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2026-03-05 00:47:45.458541 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2026-03-05 00:47:45.458550 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2026-03-05 00:47:45.458560 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2026-03-05 00:47:45.458570 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2026-03-05 00:47:45.458580 | orchestrator |
2026-03-05 00:47:45.458591 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2026-03-05 00:47:45.458602 | orchestrator | Thursday 05 March 2026 00:47:17 +0000 (0:00:07.717) 0:01:36.625 ********
2026-03-05 00:47:45.458611 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:47:45.458620 | orchestrator | ok: [testbed-manager]
2026-03-05 00:47:45.458637 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:47:45.458648 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:47:45.458656 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:47:45.458666 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:47:45.458675 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:47:45.458686 | orchestrator |
2026-03-05 00:47:45.458696 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2026-03-05 00:47:45.458706 | orchestrator | Thursday 05 March 2026 00:47:19 +0000 (0:00:01.534) 0:01:38.160 ********
2026-03-05 00:47:45.458716 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:47:45.458725 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:47:45.458735 | orchestrator | changed: [testbed-manager]
2026-03-05 00:47:45.458744 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:47:45.458754 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:47:45.458764 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:47:45.458774 |
orchestrator | changed: [testbed-node-5] 2026-03-05 00:47:45.458784 | orchestrator | 2026-03-05 00:47:45.458794 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2026-03-05 00:47:45.458804 | orchestrator | Thursday 05 March 2026 00:47:21 +0000 (0:00:02.155) 0:01:40.315 ******** 2026-03-05 00:47:45.458814 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:47:45.458822 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:47:45.458828 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:47:45.458834 | orchestrator | ok: [testbed-manager] 2026-03-05 00:47:45.458839 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:47:45.458845 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:47:45.458868 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:47:45.458874 | orchestrator | 2026-03-05 00:47:45.458880 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2026-03-05 00:47:45.458886 | orchestrator | Thursday 05 March 2026 00:47:23 +0000 (0:00:01.830) 0:01:42.146 ******** 2026-03-05 00:47:45.458892 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:47:45.458897 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:47:45.458903 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:47:45.458908 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:47:45.458914 | orchestrator | ok: [testbed-manager] 2026-03-05 00:47:45.458920 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:47:45.458925 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:47:45.458931 | orchestrator | 2026-03-05 00:47:45.458937 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2026-03-05 00:47:45.458943 | orchestrator | Thursday 05 March 2026 00:47:25 +0000 (0:00:02.299) 0:01:44.445 ******** 2026-03-05 00:47:45.458949 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2026-03-05 
00:47:45.458956 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-05 00:47:45.458963 | orchestrator | 2026-03-05 00:47:45.458973 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2026-03-05 00:47:45.458979 | orchestrator | Thursday 05 March 2026 00:47:27 +0000 (0:00:02.222) 0:01:46.667 ******** 2026-03-05 00:47:45.458985 | orchestrator | changed: [testbed-manager] 2026-03-05 00:47:45.458990 | orchestrator | 2026-03-05 00:47:45.458996 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2026-03-05 00:47:45.459002 | orchestrator | Thursday 05 March 2026 00:47:31 +0000 (0:00:03.537) 0:01:50.205 ******** 2026-03-05 00:47:45.459008 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:47:45.459013 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:47:45.459019 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:47:45.459025 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:47:45.459031 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:47:45.459036 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:47:45.459042 | orchestrator | changed: [testbed-manager] 2026-03-05 00:47:45.459048 | orchestrator | 2026-03-05 00:47:45.459058 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-05 00:47:45.459064 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 00:47:45.459071 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 00:47:45.459077 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 00:47:45.459083 | orchestrator | testbed-node-2 : 
ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 00:47:45.459095 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 00:47:45.459101 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 00:47:45.459107 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 00:47:45.459113 | orchestrator | 2026-03-05 00:47:45.459119 | orchestrator | 2026-03-05 00:47:45.459125 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-05 00:47:45.459131 | orchestrator | Thursday 05 March 2026 00:47:42 +0000 (0:00:11.420) 0:02:01.626 ******** 2026-03-05 00:47:45.459137 | orchestrator | =============================================================================== 2026-03-05 00:47:45.459142 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 55.46s 2026-03-05 00:47:45.459148 | orchestrator | osism.services.netdata : Add repository -------------------------------- 15.02s 2026-03-05 00:47:45.459154 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 11.42s 2026-03-05 00:47:45.459160 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 7.72s 2026-03-05 00:47:45.459165 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 4.32s 2026-03-05 00:47:45.459171 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 3.54s 2026-03-05 00:47:45.459177 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.49s 2026-03-05 00:47:45.459183 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 3.21s 2026-03-05 00:47:45.459188 | orchestrator | Group hosts based on enabled services 
----------------------------------- 2.97s 2026-03-05 00:47:45.459194 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.30s 2026-03-05 00:47:45.459200 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 2.22s 2026-03-05 00:47:45.459206 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.16s 2026-03-05 00:47:45.459211 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 2.16s 2026-03-05 00:47:45.459217 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.83s 2026-03-05 00:47:45.459223 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.70s 2026-03-05 00:47:45.459229 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.53s 2026-03-05 00:47:45.459235 | orchestrator | 2026-03-05 00:47:45 | INFO  | Task b298fc93-78be-46fb-83ec-91f86f8bf8f2 is in state SUCCESS 2026-03-05 00:47:45.459302 | orchestrator | 2026-03-05 00:47:45 | INFO  | Task 8ec781bc-518e-4c54-a414-2695bac829c3 is in state STARTED 2026-03-05 00:47:45.459903 | orchestrator | 2026-03-05 00:47:45 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED 2026-03-05 00:47:45.460737 | orchestrator | 2026-03-05 00:47:45 | INFO  | Task 3450c4c7-3c17-4f2e-b3e5-50921909115f is in state STARTED 2026-03-05 00:47:45.460818 | orchestrator | 2026-03-05 00:47:45 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:47:48.509897 | orchestrator | 2026-03-05 00:47:48 | INFO  | Task 8ec781bc-518e-4c54-a414-2695bac829c3 is in state STARTED 2026-03-05 00:47:48.510420 | orchestrator | 2026-03-05 00:47:48 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED 2026-03-05 00:47:48.512320 | orchestrator | 2026-03-05 00:47:48 | INFO  | Task 3450c4c7-3c17-4f2e-b3e5-50921909115f is in state STARTED 
2026-03-05 00:47:48.512363 | orchestrator | 2026-03-05 00:47:48 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:47:51.549689 | orchestrator | 2026-03-05 00:47:51 | INFO  | Task 8ec781bc-518e-4c54-a414-2695bac829c3 is in state STARTED 2026-03-05 00:47:51.549755 | orchestrator | 2026-03-05 00:47:51 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED 2026-03-05 00:47:51.549765 | orchestrator | 2026-03-05 00:47:51 | INFO  | Task 3450c4c7-3c17-4f2e-b3e5-50921909115f is in state STARTED 2026-03-05 00:47:51.549772 | orchestrator | 2026-03-05 00:47:51 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:47:54.596005 | orchestrator | 2026-03-05 00:47:54 | INFO  | Task 8ec781bc-518e-4c54-a414-2695bac829c3 is in state STARTED 2026-03-05 00:47:54.597655 | orchestrator | 2026-03-05 00:47:54 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED 2026-03-05 00:47:54.598739 | orchestrator | 2026-03-05 00:47:54 | INFO  | Task 3450c4c7-3c17-4f2e-b3e5-50921909115f is in state STARTED 2026-03-05 00:47:54.598791 | orchestrator | 2026-03-05 00:47:54 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:47:57.649147 | orchestrator | 2026-03-05 00:47:57 | INFO  | Task 8ec781bc-518e-4c54-a414-2695bac829c3 is in state STARTED 2026-03-05 00:47:57.650938 | orchestrator | 2026-03-05 00:47:57 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED 2026-03-05 00:47:57.652188 | orchestrator | 2026-03-05 00:47:57 | INFO  | Task 3450c4c7-3c17-4f2e-b3e5-50921909115f is in state STARTED 2026-03-05 00:47:57.652238 | orchestrator | 2026-03-05 00:47:57 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:48:00.701157 | orchestrator | 2026-03-05 00:48:00 | INFO  | Task 8ec781bc-518e-4c54-a414-2695bac829c3 is in state STARTED 2026-03-05 00:48:00.702794 | orchestrator | 2026-03-05 00:48:00 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED 2026-03-05 00:48:00.705041 | orchestrator | 2026-03-05 
00:48:00 | INFO  | Task 3450c4c7-3c17-4f2e-b3e5-50921909115f is in state STARTED 2026-03-05 00:48:00.705126 | orchestrator | 2026-03-05 00:48:00 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:48:03.756311 | orchestrator | 2026-03-05 00:48:03 | INFO  | Task 8ec781bc-518e-4c54-a414-2695bac829c3 is in state STARTED 2026-03-05 00:48:03.759694 | orchestrator | 2026-03-05 00:48:03 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED 2026-03-05 00:48:03.762629 | orchestrator | 2026-03-05 00:48:03 | INFO  | Task 3450c4c7-3c17-4f2e-b3e5-50921909115f is in state STARTED 2026-03-05 00:48:03.762671 | orchestrator | 2026-03-05 00:48:03 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:48:06.808119 | orchestrator | 2026-03-05 00:48:06 | INFO  | Task 8ec781bc-518e-4c54-a414-2695bac829c3 is in state STARTED 2026-03-05 00:48:06.809673 | orchestrator | 2026-03-05 00:48:06 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED 2026-03-05 00:48:06.811104 | orchestrator | 2026-03-05 00:48:06 | INFO  | Task 3450c4c7-3c17-4f2e-b3e5-50921909115f is in state STARTED 2026-03-05 00:48:06.811153 | orchestrator | 2026-03-05 00:48:06 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:48:09.868229 | orchestrator | 2026-03-05 00:48:09 | INFO  | Task 8ec781bc-518e-4c54-a414-2695bac829c3 is in state STARTED 2026-03-05 00:48:09.870710 | orchestrator | 2026-03-05 00:48:09 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED 2026-03-05 00:48:09.872683 | orchestrator | 2026-03-05 00:48:09 | INFO  | Task 3450c4c7-3c17-4f2e-b3e5-50921909115f is in state STARTED 2026-03-05 00:48:09.873324 | orchestrator | 2026-03-05 00:48:09 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:48:12.911482 | orchestrator | 2026-03-05 00:48:12 | INFO  | Task c090c04f-0c84-49f4-bac8-b3241adbe1fb is in state STARTED 2026-03-05 00:48:12.911590 | orchestrator | 2026-03-05 00:48:12 | INFO  | Task 
8ec781bc-518e-4c54-a414-2695bac829c3 is in state STARTED 2026-03-05 00:48:12.911914 | orchestrator | 2026-03-05 00:48:12 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED 2026-03-05 00:48:12.912760 | orchestrator | 2026-03-05 00:48:12 | INFO  | Task 7b9dc605-3371-4442-bb74-294a1c91a0c3 is in state STARTED 2026-03-05 00:48:12.914976 | orchestrator | 2026-03-05 00:48:12 | INFO  | Task 54ba0c27-c6ab-48b5-a972-50508998d0d1 is in state STARTED 2026-03-05 00:48:12.918785 | orchestrator | 2026-03-05 00:48:12 | INFO  | Task 3450c4c7-3c17-4f2e-b3e5-50921909115f is in state SUCCESS 2026-03-05 00:48:12.922123 | orchestrator | 2026-03-05 00:48:12.922181 | orchestrator | 2026-03-05 00:48:12.922204 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-03-05 00:48:12.922226 | orchestrator | 2026-03-05 00:48:12.922247 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-03-05 00:48:12.922269 | orchestrator | Thursday 05 March 2026 00:45:34 +0000 (0:00:00.325) 0:00:00.325 ******** 2026-03-05 00:48:12.922292 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-05 00:48:12.922344 | orchestrator | 2026-03-05 00:48:12.922357 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-03-05 00:48:12.922368 | orchestrator | Thursday 05 March 2026 00:45:36 +0000 (0:00:01.445) 0:00:01.770 ******** 2026-03-05 00:48:12.922379 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-05 00:48:12.922390 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-05 00:48:12.922401 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-05 00:48:12.922413 | orchestrator | changed: 
[testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-05 00:48:12.922424 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-05 00:48:12.922435 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-05 00:48:12.922446 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-05 00:48:12.922458 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-05 00:48:12.922469 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-05 00:48:12.922480 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-05 00:48:12.922491 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-05 00:48:12.922502 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-05 00:48:12.922513 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-05 00:48:12.922524 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-05 00:48:12.922535 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-05 00:48:12.922568 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-05 00:48:12.922580 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-05 00:48:12.922591 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-05 00:48:12.922605 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-05 00:48:12.922617 | orchestrator | changed: [testbed-node-2] => 
(item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-05 00:48:12.922629 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-05 00:48:12.922642 | orchestrator | 2026-03-05 00:48:12.922655 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-03-05 00:48:12.922668 | orchestrator | Thursday 05 March 2026 00:45:40 +0000 (0:00:04.527) 0:00:06.298 ******** 2026-03-05 00:48:12.922681 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-05 00:48:12.922694 | orchestrator | 2026-03-05 00:48:12.922707 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-03-05 00:48:12.922719 | orchestrator | Thursday 05 March 2026 00:45:42 +0000 (0:00:01.438) 0:00:07.736 ******** 2026-03-05 00:48:12.922737 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-05 00:48:12.922756 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-05 00:48:12.922803 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-05 00:48:12.922819 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-05 00:48:12.922887 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-05 00:48:12.922921 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-05 00:48:12.922940 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-05 00:48:12.922952 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:48:12.922965 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:48:12.922998 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:48:12.923012 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:48:12.923023 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:48:12.923041 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:48:12.923054 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:48:12.923080 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:48:12.923091 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:48:12.923103 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:48:12.923126 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:48:12.923138 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:48:12.923149 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:48:12.923167 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:48:12.923178 | orchestrator | 2026-03-05 00:48:12.923189 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-03-05 00:48:12.923201 | orchestrator | Thursday 05 March 2026 00:45:46 +0000 (0:00:04.707) 0:00:12.444 ******** 2026-03-05 00:48:12.923212 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-05 00:48:12.923224 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:48:12.923236 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:48:12.923247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-05 00:48:12.923326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:48:12.923340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:48:12.923362 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:48:12.923374 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:48:12.923386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-05 00:48:12.923398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:48:12.923409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:48:12.923420 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:48:12.923431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-05 00:48:12.923443 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-05 00:48:12.923455 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:48:12.923479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:48:12.923497 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:48:12.923508 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:48:12.923520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:48:12.923531 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:48:12.923542 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-05 00:48:12.923554 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:48:12.923565 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:48:12.923576 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:48:12.923587 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-05 00:48:12.923610 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 
'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:48:12.923629 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:48:12.923640 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:48:12.923651 | orchestrator | 2026-03-05 00:48:12.923662 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-03-05 00:48:12.923673 | orchestrator | Thursday 05 March 2026 00:45:49 +0000 (0:00:02.113) 0:00:14.557 ******** 2026-03-05 00:48:12.923684 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-05 00:48:12.923696 | orchestrator | skipping: [testbed-manager] => (item={'key': 
'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:48:12.923708 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:48:12.923719 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:48:12.923730 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-05 00:48:12.923741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:48:12.923769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:48:12.923848 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-05 00:48:12.923872 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:48:12.923884 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-05 00:48:12.923896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-05 00:48:12.923908 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:48:12.923919 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:48:12.923931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 
'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:48:12.923942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:48:12.923969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:48:12.923988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:48:12.924000 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-05 00:48:12.924011 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:48:12.924022 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:48:12.924033 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:48:12.924044 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:48:12.924055 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:48:12.924066 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:48:12.924077 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-05 00:48:12.924088 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:48:12.924107 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:48:12.924118 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:48:12.924129 | orchestrator | 2026-03-05 00:48:12.924140 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-03-05 00:48:12.924151 | orchestrator | Thursday 05 March 2026 00:45:52 +0000 (0:00:02.998) 0:00:17.555 ******** 2026-03-05 00:48:12.924162 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:48:12.924172 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:48:12.924183 | orchestrator | skipping: [testbed-node-0] 2026-03-05 
00:48:12.924194 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:48:12.924205 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:48:12.924226 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:48:12.924238 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:48:12.924248 | orchestrator | 2026-03-05 00:48:12.924259 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-03-05 00:48:12.924286 | orchestrator | Thursday 05 March 2026 00:45:54 +0000 (0:00:02.289) 0:00:19.845 ******** 2026-03-05 00:48:12.924298 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:48:12.924309 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:48:12.924330 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:48:12.924341 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:48:12.924351 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:48:12.924362 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:48:12.924373 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:48:12.924383 | orchestrator | 2026-03-05 00:48:12.924394 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-03-05 00:48:12.924405 | orchestrator | Thursday 05 March 2026 00:45:56 +0000 (0:00:02.363) 0:00:22.208 ******** 2026-03-05 00:48:12.924417 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-05 00:48:12.924428 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 
'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:48:12.924440 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-05 00:48:12.924451 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-05 00:48:12.924470 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-05 00:48:12.924482 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-05 00:48:12.924504 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-05 00:48:12.924517 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:48:12.924529 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:48:12.924540 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:48:12.924552 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:48:12.924569 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-05 00:48:12.924580 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:48:12.924603 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:48:12.924615 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:48:12.924627 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:48:12.924638 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:48:12.924650 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:48:12.924668 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:48:12.924679 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:48:12.924691 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:48:12.924702 | orchestrator | 2026-03-05 00:48:12.924713 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-03-05 00:48:12.924724 | orchestrator | Thursday 05 March 2026 00:46:08 +0000 (0:00:12.122) 0:00:34.331 ******** 2026-03-05 00:48:12.924736 | orchestrator | [WARNING]: Skipped 2026-03-05 00:48:12.924748 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-03-05 00:48:12.924759 | orchestrator | to this access issue: 2026-03-05 00:48:12.924770 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-03-05 00:48:12.924781 | orchestrator | directory 2026-03-05 00:48:12.924792 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-05 00:48:12.924803 | orchestrator | 2026-03-05 00:48:12.924814 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-03-05 00:48:12.924825 | orchestrator | 
Thursday 05 March 2026 00:46:10 +0000 (0:00:02.132) 0:00:36.463 ******** 2026-03-05 00:48:12.924862 | orchestrator | [WARNING]: Skipped 2026-03-05 00:48:12.924873 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-03-05 00:48:12.924895 | orchestrator | to this access issue: 2026-03-05 00:48:12.924907 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-03-05 00:48:12.924917 | orchestrator | directory 2026-03-05 00:48:12.924928 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-05 00:48:12.924939 | orchestrator | 2026-03-05 00:48:12.924950 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-03-05 00:48:12.924961 | orchestrator | Thursday 05 March 2026 00:46:12 +0000 (0:00:01.105) 0:00:37.568 ******** 2026-03-05 00:48:12.924971 | orchestrator | [WARNING]: Skipped 2026-03-05 00:48:12.924982 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-03-05 00:48:12.924993 | orchestrator | to this access issue: 2026-03-05 00:48:12.925004 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-03-05 00:48:12.925015 | orchestrator | directory 2026-03-05 00:48:12.925026 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-05 00:48:12.925036 | orchestrator | 2026-03-05 00:48:12.925047 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-03-05 00:48:12.925058 | orchestrator | Thursday 05 March 2026 00:46:13 +0000 (0:00:01.164) 0:00:38.733 ******** 2026-03-05 00:48:12.925069 | orchestrator | [WARNING]: Skipped 2026-03-05 00:48:12.925080 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-03-05 00:48:12.925091 | orchestrator | to this access issue: 2026-03-05 00:48:12.925108 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-03-05 00:48:12.925119 | orchestrator | directory 2026-03-05 00:48:12.925130 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-05 00:48:12.925141 | orchestrator | 2026-03-05 00:48:12.925151 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-03-05 00:48:12.925162 | orchestrator | Thursday 05 March 2026 00:46:14 +0000 (0:00:01.458) 0:00:40.191 ******** 2026-03-05 00:48:12.925173 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:48:12.925184 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:48:12.925194 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:48:12.925205 | orchestrator | changed: [testbed-manager] 2026-03-05 00:48:12.925216 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:48:12.925226 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:48:12.925237 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:48:12.925248 | orchestrator | 2026-03-05 00:48:12.925258 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-03-05 00:48:12.925269 | orchestrator | Thursday 05 March 2026 00:46:20 +0000 (0:00:05.330) 0:00:45.521 ******** 2026-03-05 00:48:12.925280 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-05 00:48:12.925291 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-05 00:48:12.925302 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-05 00:48:12.925313 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-05 00:48:12.925324 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-05 
00:48:12.925335 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-05 00:48:12.925346 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-05 00:48:12.925356 | orchestrator | 2026-03-05 00:48:12.925367 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-03-05 00:48:12.925378 | orchestrator | Thursday 05 March 2026 00:46:23 +0000 (0:00:03.658) 0:00:49.180 ******** 2026-03-05 00:48:12.925389 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:48:12.925399 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:48:12.925410 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:48:12.925421 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:48:12.925431 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:48:12.925441 | orchestrator | changed: [testbed-manager] 2026-03-05 00:48:12.925452 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:48:12.925462 | orchestrator | 2026-03-05 00:48:12.925473 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-03-05 00:48:12.925484 | orchestrator | Thursday 05 March 2026 00:46:27 +0000 (0:00:04.307) 0:00:53.487 ******** 2026-03-05 00:48:12.925495 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-05 00:48:12.925517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:48:12.925536 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-05 00:48:12.925547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:48:12.925559 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-05 00:48:12.925570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:48:12.925582 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:48:12.925593 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:48:12.925605 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 
'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-05 00:48:12.925625 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:48:12.925643 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:48:12.925655 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 
'dimensions': {}}}) 2026-03-05 00:48:12.925666 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:48:12.925677 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:48:12.925689 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:48:12.925701 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-05 00:48:12.925712 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:48:12.925736 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-05 00:48:12.925748 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:48:12.925760 | 
orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:48:12.925779 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:48:12.925791 | orchestrator | 2026-03-05 00:48:12.925802 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-03-05 00:48:12.925813 | orchestrator | Thursday 05 March 2026 00:46:32 +0000 (0:00:04.837) 0:00:58.325 ******** 2026-03-05 00:48:12.925824 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-05 00:48:12.925857 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-05 00:48:12.925868 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-05 00:48:12.925879 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-05 00:48:12.925890 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-05 00:48:12.925900 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-05 00:48:12.925911 | orchestrator | 
changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-05 00:48:12.925922 | orchestrator | 2026-03-05 00:48:12.925933 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-03-05 00:48:12.925944 | orchestrator | Thursday 05 March 2026 00:46:36 +0000 (0:00:03.333) 0:01:01.659 ******** 2026-03-05 00:48:12.925955 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-05 00:48:12.925966 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-05 00:48:12.925976 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-05 00:48:12.925987 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-05 00:48:12.926004 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-05 00:48:12.926015 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-05 00:48:12.926078 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-05 00:48:12.926090 | orchestrator | 2026-03-05 00:48:12.926101 | orchestrator | TASK [common : Check common containers] **************************************** 2026-03-05 00:48:12.926112 | orchestrator | Thursday 05 March 2026 00:46:39 +0000 (0:00:02.907) 0:01:04.566 ******** 2026-03-05 00:48:12.926123 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-05 00:48:12.926380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-05 00:48:12.926474 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-05 00:48:12.926490 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-05 00:48:12.926502 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-05 00:48:12.926513 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:48:12.926525 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-05 00:48:12.926558 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:48:12.926606 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:48:12.926619 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:48:12.926630 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 
2026-03-05 00:48:12.926642 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:48:12.926653 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:48:12.926673 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:48:12.926685 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:48:12.926697 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:48:12.926719 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:48:12.926731 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:48:12.926743 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:48:12.926754 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:48:12.926765 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:48:12.926777 | orchestrator | 2026-03-05 00:48:12.926790 | orchestrator | TASK [common : Creating log volume] ******************************************** 2026-03-05 00:48:12.926809 | orchestrator | Thursday 05 March 2026 00:46:44 +0000 (0:00:05.019) 0:01:09.586 ******** 2026-03-05 00:48:12.926820 | orchestrator | changed: [testbed-manager] 2026-03-05 00:48:12.926857 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:48:12.926869 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:48:12.926879 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:48:12.926890 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:48:12.926901 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:48:12.926911 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:48:12.926922 | orchestrator | 2026-03-05 00:48:12.926933 | 
orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2026-03-05 00:48:12.926943 | orchestrator | Thursday 05 March 2026 00:46:46 +0000 (0:00:02.256) 0:01:11.842 ******** 2026-03-05 00:48:12.926954 | orchestrator | changed: [testbed-manager] 2026-03-05 00:48:12.926965 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:48:12.926975 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:48:12.926986 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:48:12.926997 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:48:12.927007 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:48:12.927017 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:48:12.927028 | orchestrator | 2026-03-05 00:48:12.927039 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-05 00:48:12.927050 | orchestrator | Thursday 05 March 2026 00:46:47 +0000 (0:00:01.650) 0:01:13.492 ******** 2026-03-05 00:48:12.927060 | orchestrator | 2026-03-05 00:48:12.927071 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-05 00:48:12.927082 | orchestrator | Thursday 05 March 2026 00:46:48 +0000 (0:00:00.083) 0:01:13.576 ******** 2026-03-05 00:48:12.927092 | orchestrator | 2026-03-05 00:48:12.927103 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-05 00:48:12.927114 | orchestrator | Thursday 05 March 2026 00:46:48 +0000 (0:00:00.079) 0:01:13.655 ******** 2026-03-05 00:48:12.927124 | orchestrator | 2026-03-05 00:48:12.927135 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-05 00:48:12.927146 | orchestrator | Thursday 05 March 2026 00:46:48 +0000 (0:00:00.346) 0:01:14.001 ******** 2026-03-05 00:48:12.927157 | orchestrator | 2026-03-05 00:48:12.927168 | orchestrator | TASK [common : Flush handlers] 
************************************************* 2026-03-05 00:48:12.927179 | orchestrator | Thursday 05 March 2026 00:46:48 +0000 (0:00:00.073) 0:01:14.075 ******** 2026-03-05 00:48:12.927190 | orchestrator | 2026-03-05 00:48:12.927200 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-05 00:48:12.927211 | orchestrator | Thursday 05 March 2026 00:46:48 +0000 (0:00:00.074) 0:01:14.150 ******** 2026-03-05 00:48:12.927222 | orchestrator | 2026-03-05 00:48:12.927233 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-05 00:48:12.927243 | orchestrator | Thursday 05 March 2026 00:46:48 +0000 (0:00:00.078) 0:01:14.228 ******** 2026-03-05 00:48:12.927254 | orchestrator | 2026-03-05 00:48:12.927265 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-03-05 00:48:12.927286 | orchestrator | Thursday 05 March 2026 00:46:48 +0000 (0:00:00.118) 0:01:14.347 ******** 2026-03-05 00:48:12.927298 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:48:12.927308 | orchestrator | changed: [testbed-manager] 2026-03-05 00:48:12.927319 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:48:12.927330 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:48:12.927340 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:48:12.927351 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:48:12.927361 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:48:12.927372 | orchestrator | 2026-03-05 00:48:12.927383 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2026-03-05 00:48:12.927394 | orchestrator | Thursday 05 March 2026 00:47:25 +0000 (0:00:36.591) 0:01:50.938 ******** 2026-03-05 00:48:12.927404 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:48:12.927415 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:48:12.927432 | orchestrator | changed: 
[testbed-manager] 2026-03-05 00:48:12.927443 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:48:12.927453 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:48:12.927464 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:48:12.927474 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:48:12.927485 | orchestrator | 2026-03-05 00:48:12.927496 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2026-03-05 00:48:12.927506 | orchestrator | Thursday 05 March 2026 00:47:58 +0000 (0:00:32.720) 0:02:23.658 ******** 2026-03-05 00:48:12.927517 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:48:12.927529 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:48:12.927540 | orchestrator | ok: [testbed-manager] 2026-03-05 00:48:12.927550 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:48:12.927561 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:48:12.927571 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:48:12.927582 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:48:12.927592 | orchestrator | 2026-03-05 00:48:12.927604 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2026-03-05 00:48:12.927614 | orchestrator | Thursday 05 March 2026 00:48:00 +0000 (0:00:02.768) 0:02:26.427 ******** 2026-03-05 00:48:12.927625 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:48:12.927636 | orchestrator | changed: [testbed-manager] 2026-03-05 00:48:12.927646 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:48:12.927657 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:48:12.927668 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:48:12.927678 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:48:12.927689 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:48:12.927699 | orchestrator | 2026-03-05 00:48:12.927710 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-05 
00:48:12.927722 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-05 00:48:12.927733 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-05 00:48:12.927744 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-05 00:48:12.927755 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-05 00:48:12.927766 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-05 00:48:12.927777 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-05 00:48:12.927788 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-05 00:48:12.927798 | orchestrator | 2026-03-05 00:48:12.927810 | orchestrator | 2026-03-05 00:48:12.927821 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-05 00:48:12.927874 | orchestrator | Thursday 05 March 2026 00:48:10 +0000 (0:00:09.202) 0:02:35.629 ******** 2026-03-05 00:48:12.927886 | orchestrator | =============================================================================== 2026-03-05 00:48:12.927896 | orchestrator | common : Restart fluentd container ------------------------------------- 36.59s 2026-03-05 00:48:12.927907 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 32.72s 2026-03-05 00:48:12.927918 | orchestrator | common : Copying over config.json files for services ------------------- 12.12s 2026-03-05 00:48:12.927928 | orchestrator | common : Restart cron container ----------------------------------------- 9.20s 2026-03-05 00:48:12.927939 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 
5.33s 2026-03-05 00:48:12.927956 | orchestrator | common : Check common containers ---------------------------------------- 5.02s 2026-03-05 00:48:12.927967 | orchestrator | common : Ensuring config directories have correct owner and permission --- 4.84s 2026-03-05 00:48:12.927978 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.71s 2026-03-05 00:48:12.927988 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.53s 2026-03-05 00:48:12.928013 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 4.31s 2026-03-05 00:48:12.928035 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.66s 2026-03-05 00:48:12.928046 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.33s 2026-03-05 00:48:12.928057 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.00s 2026-03-05 00:48:12.928073 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.91s 2026-03-05 00:48:12.928090 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.77s 2026-03-05 00:48:12.928102 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 2.36s 2026-03-05 00:48:12.928113 | orchestrator | common : Copying over /run subdirectories conf -------------------------- 2.29s 2026-03-05 00:48:12.928123 | orchestrator | common : Creating log volume -------------------------------------------- 2.26s 2026-03-05 00:48:12.928134 | orchestrator | common : Find custom fluentd input config files ------------------------- 2.13s 2026-03-05 00:48:12.928145 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 2.11s 2026-03-05 00:48:12.928156 | orchestrator | 2026-03-05 00:48:12 | INFO  | Task 2d5596eb-9801-4676-bc6c-9e2c57f4c1fe is in state 
STARTED 2026-03-05 00:48:12.928167 | orchestrator | 2026-03-05 00:48:12 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:48:15.957183 | orchestrator | 2026-03-05 00:48:15 | INFO  | Task c090c04f-0c84-49f4-bac8-b3241adbe1fb is in state STARTED 2026-03-05 00:48:15.958271 | orchestrator | 2026-03-05 00:48:15 | INFO  | Task 8ec781bc-518e-4c54-a414-2695bac829c3 is in state STARTED 2026-03-05 00:48:15.958320 | orchestrator | 2026-03-05 00:48:15 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED 2026-03-05 00:48:15.958983 | orchestrator | 2026-03-05 00:48:15 | INFO  | Task 7b9dc605-3371-4442-bb74-294a1c91a0c3 is in state STARTED 2026-03-05 00:48:15.962080 | orchestrator | 2026-03-05 00:48:15 | INFO  | Task 54ba0c27-c6ab-48b5-a972-50508998d0d1 is in state STARTED 2026-03-05 00:48:15.963177 | orchestrator | 2026-03-05 00:48:15 | INFO  | Task 2d5596eb-9801-4676-bc6c-9e2c57f4c1fe is in state STARTED 2026-03-05 00:48:15.963196 | orchestrator | 2026-03-05 00:48:15 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:48:19.010353 | orchestrator | 2026-03-05 00:48:19 | INFO  | Task c090c04f-0c84-49f4-bac8-b3241adbe1fb is in state STARTED 2026-03-05 00:48:19.010745 | orchestrator | 2026-03-05 00:48:19 | INFO  | Task 8ec781bc-518e-4c54-a414-2695bac829c3 is in state STARTED 2026-03-05 00:48:19.011936 | orchestrator | 2026-03-05 00:48:19 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED 2026-03-05 00:48:19.013784 | orchestrator | 2026-03-05 00:48:19 | INFO  | Task 7b9dc605-3371-4442-bb74-294a1c91a0c3 is in state STARTED 2026-03-05 00:48:19.014961 | orchestrator | 2026-03-05 00:48:19 | INFO  | Task 54ba0c27-c6ab-48b5-a972-50508998d0d1 is in state STARTED 2026-03-05 00:48:19.017490 | orchestrator | 2026-03-05 00:48:19 | INFO  | Task 2d5596eb-9801-4676-bc6c-9e2c57f4c1fe is in state STARTED 2026-03-05 00:48:19.017934 | orchestrator | 2026-03-05 00:48:19 | INFO  | Wait 1 second(s) until the next check 2026-03-05 
00:48:22.090557 | orchestrator | 2026-03-05 00:48:22 | INFO  | Task c090c04f-0c84-49f4-bac8-b3241adbe1fb is in state STARTED 2026-03-05 00:48:22.092737 | orchestrator | 2026-03-05 00:48:22 | INFO  | Task 8ec781bc-518e-4c54-a414-2695bac829c3 is in state STARTED 2026-03-05 00:48:22.096076 | orchestrator | 2026-03-05 00:48:22 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED 2026-03-05 00:48:22.096952 | orchestrator | 2026-03-05 00:48:22 | INFO  | Task 7b9dc605-3371-4442-bb74-294a1c91a0c3 is in state STARTED 2026-03-05 00:48:22.098620 | orchestrator | 2026-03-05 00:48:22 | INFO  | Task 54ba0c27-c6ab-48b5-a972-50508998d0d1 is in state STARTED 2026-03-05 00:48:22.099379 | orchestrator | 2026-03-05 00:48:22 | INFO  | Task 2d5596eb-9801-4676-bc6c-9e2c57f4c1fe is in state STARTED 2026-03-05 00:48:22.100346 | orchestrator | 2026-03-05 00:48:22 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:48:25.152004 | orchestrator | 2026-03-05 00:48:25 | INFO  | Task c090c04f-0c84-49f4-bac8-b3241adbe1fb is in state STARTED 2026-03-05 00:48:25.152416 | orchestrator | 2026-03-05 00:48:25 | INFO  | Task 8ec781bc-518e-4c54-a414-2695bac829c3 is in state STARTED 2026-03-05 00:48:25.153318 | orchestrator | 2026-03-05 00:48:25 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED 2026-03-05 00:48:25.161741 | orchestrator | 2026-03-05 00:48:25 | INFO  | Task 7b9dc605-3371-4442-bb74-294a1c91a0c3 is in state STARTED 2026-03-05 00:48:25.161953 | orchestrator | 2026-03-05 00:48:25 | INFO  | Task 54ba0c27-c6ab-48b5-a972-50508998d0d1 is in state STARTED 2026-03-05 00:48:25.162989 | orchestrator | 2026-03-05 00:48:25 | INFO  | Task 2d5596eb-9801-4676-bc6c-9e2c57f4c1fe is in state STARTED 2026-03-05 00:48:25.163640 | orchestrator | 2026-03-05 00:48:25 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:48:28.203324 | orchestrator | 2026-03-05 00:48:28 | INFO  | Task c090c04f-0c84-49f4-bac8-b3241adbe1fb is in state STARTED 2026-03-05 
00:48:28.210269 | orchestrator | 2026-03-05 00:48:28 | INFO  | Task 8ec781bc-518e-4c54-a414-2695bac829c3 is in state STARTED 2026-03-05 00:48:28.210353 | orchestrator | 2026-03-05 00:48:28 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED 2026-03-05 00:48:28.212275 | orchestrator | 2026-03-05 00:48:28 | INFO  | Task 7b9dc605-3371-4442-bb74-294a1c91a0c3 is in state STARTED 2026-03-05 00:48:28.213413 | orchestrator | 2026-03-05 00:48:28 | INFO  | Task 54ba0c27-c6ab-48b5-a972-50508998d0d1 is in state STARTED 2026-03-05 00:48:28.214553 | orchestrator | 2026-03-05 00:48:28 | INFO  | Task 2d5596eb-9801-4676-bc6c-9e2c57f4c1fe is in state STARTED 2026-03-05 00:48:28.214593 | orchestrator | 2026-03-05 00:48:28 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:48:31.263738 | orchestrator | 2026-03-05 00:48:31 | INFO  | Task c090c04f-0c84-49f4-bac8-b3241adbe1fb is in state SUCCESS 2026-03-05 00:48:31.264238 | orchestrator | 2026-03-05 00:48:31 | INFO  | Task b7047ca1-215b-4831-bacf-0f6c1d225628 is in state STARTED 2026-03-05 00:48:31.264961 | orchestrator | 2026-03-05 00:48:31 | INFO  | Task 8ec781bc-518e-4c54-a414-2695bac829c3 is in state STARTED 2026-03-05 00:48:31.266095 | orchestrator | 2026-03-05 00:48:31 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED 2026-03-05 00:48:31.267037 | orchestrator | 2026-03-05 00:48:31 | INFO  | Task 7b9dc605-3371-4442-bb74-294a1c91a0c3 is in state STARTED 2026-03-05 00:48:31.267510 | orchestrator | 2026-03-05 00:48:31 | INFO  | Task 54ba0c27-c6ab-48b5-a972-50508998d0d1 is in state STARTED 2026-03-05 00:48:31.274105 | orchestrator | 2026-03-05 00:48:31 | INFO  | Task 2d5596eb-9801-4676-bc6c-9e2c57f4c1fe is in state STARTED 2026-03-05 00:48:31.274201 | orchestrator | 2026-03-05 00:48:31 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:48:34.332982 | orchestrator | 2026-03-05 00:48:34 | INFO  | Task b7047ca1-215b-4831-bacf-0f6c1d225628 is in state STARTED 2026-03-05 
00:48:34.334840 | orchestrator | 2026-03-05 00:48:34 | INFO  | Task 8ec781bc-518e-4c54-a414-2695bac829c3 is in state STARTED 2026-03-05 00:48:34.336666 | orchestrator | 2026-03-05 00:48:34 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED 2026-03-05 00:48:34.338361 | orchestrator | 2026-03-05 00:48:34 | INFO  | Task 7b9dc605-3371-4442-bb74-294a1c91a0c3 is in state STARTED 2026-03-05 00:48:34.339289 | orchestrator | 2026-03-05 00:48:34 | INFO  | Task 54ba0c27-c6ab-48b5-a972-50508998d0d1 is in state STARTED 2026-03-05 00:48:34.341373 | orchestrator | 2026-03-05 00:48:34 | INFO  | Task 2d5596eb-9801-4676-bc6c-9e2c57f4c1fe is in state STARTED 2026-03-05 00:48:34.341430 | orchestrator | 2026-03-05 00:48:34 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:48:37.381095 | orchestrator | 2026-03-05 00:48:37 | INFO  | Task b7047ca1-215b-4831-bacf-0f6c1d225628 is in state STARTED 2026-03-05 00:48:37.381485 | orchestrator | 2026-03-05 00:48:37 | INFO  | Task 8ec781bc-518e-4c54-a414-2695bac829c3 is in state STARTED 2026-03-05 00:48:37.383243 | orchestrator | 2026-03-05 00:48:37 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED 2026-03-05 00:48:37.383892 | orchestrator | 2026-03-05 00:48:37 | INFO  | Task 7b9dc605-3371-4442-bb74-294a1c91a0c3 is in state STARTED 2026-03-05 00:48:37.389382 | orchestrator | 2026-03-05 00:48:37 | INFO  | Task 54ba0c27-c6ab-48b5-a972-50508998d0d1 is in state STARTED 2026-03-05 00:48:37.393063 | orchestrator | 2026-03-05 00:48:37 | INFO  | Task 2d5596eb-9801-4676-bc6c-9e2c57f4c1fe is in state STARTED 2026-03-05 00:48:37.393121 | orchestrator | 2026-03-05 00:48:37 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:48:40.437321 | orchestrator | 2026-03-05 00:48:40 | INFO  | Task b7047ca1-215b-4831-bacf-0f6c1d225628 is in state STARTED 2026-03-05 00:48:40.437394 | orchestrator | 2026-03-05 00:48:40 | INFO  | Task 8ec781bc-518e-4c54-a414-2695bac829c3 is in state STARTED 2026-03-05 
00:48:40.438460 | orchestrator | 2026-03-05 00:48:40 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED 2026-03-05 00:48:40.438503 | orchestrator | 2026-03-05 00:48:40 | INFO  | Task 7b9dc605-3371-4442-bb74-294a1c91a0c3 is in state STARTED 2026-03-05 00:48:40.440486 | orchestrator | 2026-03-05 00:48:40 | INFO  | Task 54ba0c27-c6ab-48b5-a972-50508998d0d1 is in state STARTED 2026-03-05 00:48:40.440530 | orchestrator | 2026-03-05 00:48:40 | INFO  | Task 2d5596eb-9801-4676-bc6c-9e2c57f4c1fe is in state STARTED 2026-03-05 00:48:40.440540 | orchestrator | 2026-03-05 00:48:40 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:48:43.787221 | orchestrator | 2026-03-05 00:48:43 | INFO  | Task b7047ca1-215b-4831-bacf-0f6c1d225628 is in state STARTED 2026-03-05 00:48:43.789061 | orchestrator | 2026-03-05 00:48:43 | INFO  | Task 8ec781bc-518e-4c54-a414-2695bac829c3 is in state STARTED 2026-03-05 00:48:43.791160 | orchestrator | 2026-03-05 00:48:43 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED 2026-03-05 00:48:43.792970 | orchestrator | 2026-03-05 00:48:43 | INFO  | Task 7b9dc605-3371-4442-bb74-294a1c91a0c3 is in state STARTED 2026-03-05 00:48:43.793784 | orchestrator | 2026-03-05 00:48:43 | INFO  | Task 54ba0c27-c6ab-48b5-a972-50508998d0d1 is in state STARTED 2026-03-05 00:48:43.795775 | orchestrator | 2026-03-05 00:48:43 | INFO  | Task 2d5596eb-9801-4676-bc6c-9e2c57f4c1fe is in state STARTED 2026-03-05 00:48:43.795841 | orchestrator | 2026-03-05 00:48:43 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:48:46.826560 | orchestrator | 2026-03-05 00:48:46 | INFO  | Task b7047ca1-215b-4831-bacf-0f6c1d225628 is in state STARTED 2026-03-05 00:48:46.828157 | orchestrator | 2026-03-05 00:48:46 | INFO  | Task 8ec781bc-518e-4c54-a414-2695bac829c3 is in state STARTED 2026-03-05 00:48:46.828750 | orchestrator | 2026-03-05 00:48:46 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED 2026-03-05 
00:48:46.829323 | orchestrator | 2026-03-05 00:48:46 | INFO  | Task 7b9dc605-3371-4442-bb74-294a1c91a0c3 is in state STARTED 2026-03-05 00:48:46.830524 | orchestrator | 2026-03-05 00:48:46 | INFO  | Task 54ba0c27-c6ab-48b5-a972-50508998d0d1 is in state STARTED 2026-03-05 00:48:46.831348 | orchestrator | 2026-03-05 00:48:46 | INFO  | Task 2d5596eb-9801-4676-bc6c-9e2c57f4c1fe is in state STARTED 2026-03-05 00:48:46.831520 | orchestrator | 2026-03-05 00:48:46 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:48:49.931091 | orchestrator | 2026-03-05 00:48:49 | INFO  | Task b7047ca1-215b-4831-bacf-0f6c1d225628 is in state STARTED 2026-03-05 00:48:49.931164 | orchestrator | 2026-03-05 00:48:49 | INFO  | Task 8ec781bc-518e-4c54-a414-2695bac829c3 is in state STARTED 2026-03-05 00:48:49.931170 | orchestrator | 2026-03-05 00:48:49 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED 2026-03-05 00:48:49.931175 | orchestrator | 2026-03-05 00:48:49 | INFO  | Task 7b9dc605-3371-4442-bb74-294a1c91a0c3 is in state STARTED 2026-03-05 00:48:49.931179 | orchestrator | 2026-03-05 00:48:49 | INFO  | Task 54ba0c27-c6ab-48b5-a972-50508998d0d1 is in state STARTED 2026-03-05 00:48:49.931184 | orchestrator | 2026-03-05 00:48:49 | INFO  | Task 2d5596eb-9801-4676-bc6c-9e2c57f4c1fe is in state STARTED 2026-03-05 00:48:49.931189 | orchestrator | 2026-03-05 00:48:49 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:48:52.954484 | orchestrator | 2026-03-05 00:48:52 | INFO  | Task b7047ca1-215b-4831-bacf-0f6c1d225628 is in state STARTED 2026-03-05 00:48:52.956570 | orchestrator | 2026-03-05 00:48:52 | INFO  | Task 8ec781bc-518e-4c54-a414-2695bac829c3 is in state STARTED 2026-03-05 00:48:52.957692 | orchestrator | 2026-03-05 00:48:52 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED 2026-03-05 00:48:52.958509 | orchestrator | 2026-03-05 00:48:52 | INFO  | Task 7b9dc605-3371-4442-bb74-294a1c91a0c3 is in state STARTED 2026-03-05 
00:48:52.961365 | orchestrator | 2026-03-05 00:48:52 | INFO  | Task 54ba0c27-c6ab-48b5-a972-50508998d0d1 is in state STARTED 2026-03-05 00:48:52.962401 | orchestrator | 2026-03-05 00:48:52 | INFO  | Task 2d5596eb-9801-4676-bc6c-9e2c57f4c1fe is in state SUCCESS 2026-03-05 00:48:52.964261 | orchestrator | 2026-03-05 00:48:52.964299 | orchestrator | 2026-03-05 00:48:52.964306 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-05 00:48:52.964314 | orchestrator | 2026-03-05 00:48:52.964320 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-05 00:48:52.964327 | orchestrator | Thursday 05 March 2026 00:48:16 +0000 (0:00:00.713) 0:00:00.713 ******** 2026-03-05 00:48:52.964333 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:48:52.964341 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:48:52.964347 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:48:52.964353 | orchestrator | 2026-03-05 00:48:52.964358 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-05 00:48:52.964364 | orchestrator | Thursday 05 March 2026 00:48:17 +0000 (0:00:00.563) 0:00:01.277 ******** 2026-03-05 00:48:52.964371 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-03-05 00:48:52.964377 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-03-05 00:48:52.964383 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-03-05 00:48:52.964403 | orchestrator | 2026-03-05 00:48:52.964410 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-03-05 00:48:52.964415 | orchestrator | 2026-03-05 00:48:52.964421 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-03-05 00:48:52.964427 | orchestrator | Thursday 05 March 2026 00:48:17 +0000 (0:00:00.740) 0:00:02.017 ******** 2026-03-05 
00:48:52.964437 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:48:52.964444 | orchestrator | 2026-03-05 00:48:52.964450 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2026-03-05 00:48:52.964456 | orchestrator | Thursday 05 March 2026 00:48:18 +0000 (0:00:00.845) 0:00:02.862 ******** 2026-03-05 00:48:52.964462 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-03-05 00:48:52.964468 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-03-05 00:48:52.964473 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-03-05 00:48:52.964479 | orchestrator | 2026-03-05 00:48:52.964484 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-03-05 00:48:52.964490 | orchestrator | Thursday 05 March 2026 00:48:19 +0000 (0:00:01.104) 0:00:03.966 ******** 2026-03-05 00:48:52.964496 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-03-05 00:48:52.964502 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-03-05 00:48:52.964507 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-03-05 00:48:52.964513 | orchestrator | 2026-03-05 00:48:52.964518 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2026-03-05 00:48:52.964524 | orchestrator | Thursday 05 March 2026 00:48:22 +0000 (0:00:02.802) 0:00:06.768 ******** 2026-03-05 00:48:52.964530 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:48:52.964536 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:48:52.964541 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:48:52.964547 | orchestrator | 2026-03-05 00:48:52.964552 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-03-05 00:48:52.964558 | orchestrator | Thursday 05 March 2026 00:48:24 +0000 
(0:00:02.149) 0:00:08.918 ******** 2026-03-05 00:48:52.964564 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:48:52.964571 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:48:52.964580 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:48:52.964589 | orchestrator | 2026-03-05 00:48:52.964597 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-05 00:48:52.964603 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 00:48:52.964611 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 00:48:52.964617 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 00:48:52.964622 | orchestrator | 2026-03-05 00:48:52.964628 | orchestrator | 2026-03-05 00:48:52.964645 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-05 00:48:52.964651 | orchestrator | Thursday 05 March 2026 00:48:28 +0000 (0:00:03.341) 0:00:12.259 ******** 2026-03-05 00:48:52.964657 | orchestrator | =============================================================================== 2026-03-05 00:48:52.964663 | orchestrator | memcached : Restart memcached container --------------------------------- 3.34s 2026-03-05 00:48:52.964668 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.80s 2026-03-05 00:48:52.964674 | orchestrator | memcached : Check memcached container ----------------------------------- 2.15s 2026-03-05 00:48:52.964680 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.10s 2026-03-05 00:48:52.964686 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.85s 2026-03-05 00:48:52.964691 | orchestrator | Group hosts based on enabled services ----------------------------------- 
0.74s 2026-03-05 00:48:52.964701 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.56s 2026-03-05 00:48:52.964707 | orchestrator | 2026-03-05 00:48:52.964713 | orchestrator | 2026-03-05 00:48:52.964718 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-05 00:48:52.964724 | orchestrator | 2026-03-05 00:48:52.964730 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-05 00:48:52.964736 | orchestrator | Thursday 05 March 2026 00:48:15 +0000 (0:00:00.450) 0:00:00.450 ******** 2026-03-05 00:48:52.964742 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:48:52.964748 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:48:52.964754 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:48:52.964759 | orchestrator | 2026-03-05 00:48:52.964765 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-05 00:48:52.964779 | orchestrator | Thursday 05 March 2026 00:48:16 +0000 (0:00:00.687) 0:00:01.137 ******** 2026-03-05 00:48:52.964785 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-03-05 00:48:52.964823 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-03-05 00:48:52.964830 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-03-05 00:48:52.964837 | orchestrator | 2026-03-05 00:48:52.964843 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-03-05 00:48:52.964848 | orchestrator | 2026-03-05 00:48:52.964856 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-03-05 00:48:52.964862 | orchestrator | Thursday 05 March 2026 00:48:17 +0000 (0:00:00.753) 0:00:01.891 ******** 2026-03-05 00:48:52.964869 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 
00:48:52.964875 | orchestrator | 2026-03-05 00:48:52.964882 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-03-05 00:48:52.964888 | orchestrator | Thursday 05 March 2026 00:48:17 +0000 (0:00:00.759) 0:00:02.650 ******** 2026-03-05 00:48:52.964897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-05 00:48:52.964908 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-05 00:48:52.964915 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 
6379'], 'timeout': '30'}}}) 2026-03-05 00:48:52.964922 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-05 00:48:52.964940 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-05 00:48:52.964952 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-05 00:48:52.964959 | orchestrator | 2026-03-05 00:48:52.964966 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-03-05 00:48:52.964973 | orchestrator | Thursday 05 March 2026 00:48:19 +0000 (0:00:01.482) 0:00:04.133 ******** 2026-03-05 00:48:52.964983 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-05 00:48:52.964990 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-05 00:48:52.964997 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-05 00:48:52.965004 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-05 00:48:52.965015 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-05 00:48:52.965025 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': 
'/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-05 00:48:52.965033 | orchestrator | 2026-03-05 00:48:52.965039 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-03-05 00:48:52.965046 | orchestrator | Thursday 05 March 2026 00:48:23 +0000 (0:00:04.230) 0:00:08.364 ******** 2026-03-05 00:48:52.965053 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-05 00:48:52.965062 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-05 00:48:52.965069 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-05 00:48:52.965076 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-05 00:48:52.965087 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-05 
00:48:52.965094 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-05 00:48:52.965100 | orchestrator | 2026-03-05 00:48:52.965110 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2026-03-05 00:48:52.965117 | orchestrator | Thursday 05 March 2026 00:48:26 +0000 (0:00:03.188) 0:00:11.552 ******** 2026-03-05 00:48:52.965123 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-05 00:48:52.965133 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-05 00:48:52.965140 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-05 00:48:52.965147 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-05 00:48:52.965157 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-05 00:48:52.965164 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-05 00:48:52.965171 | orchestrator | 2026-03-05 00:48:52.965177 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-05 00:48:52.965183 | orchestrator | Thursday 05 March 2026 00:48:29 +0000 (0:00:02.313) 0:00:13.865 ******** 2026-03-05 00:48:52.965191 | orchestrator | 2026-03-05 00:48:52.965197 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-05 00:48:52.965207 | orchestrator | Thursday 05 March 2026 00:48:29 +0000 (0:00:00.205) 0:00:14.071 ******** 2026-03-05 00:48:52.965214 | orchestrator | 2026-03-05 00:48:52.965220 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-05 00:48:52.965227 | orchestrator | Thursday 05 March 2026 00:48:29 +0000 (0:00:00.219) 0:00:14.291 ******** 2026-03-05 00:48:52.965237 | orchestrator | 2026-03-05 00:48:52.965243 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-03-05 00:48:52.965249 | orchestrator 
| Thursday 05 March 2026 00:48:29 +0000 (0:00:00.167) 0:00:14.458 ******** 2026-03-05 00:48:52.965255 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:48:52.965260 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:48:52.965266 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:48:52.965272 | orchestrator | 2026-03-05 00:48:52.965277 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-03-05 00:48:52.965283 | orchestrator | Thursday 05 March 2026 00:48:41 +0000 (0:00:11.542) 0:00:26.001 ******** 2026-03-05 00:48:52.965289 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:48:52.965294 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:48:52.965300 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:48:52.965306 | orchestrator | 2026-03-05 00:48:52.965312 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-05 00:48:52.965321 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 00:48:52.965327 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 00:48:52.965336 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 00:48:52.965342 | orchestrator | 2026-03-05 00:48:52.965348 | orchestrator | 2026-03-05 00:48:52.965353 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-05 00:48:52.965359 | orchestrator | Thursday 05 March 2026 00:48:51 +0000 (0:00:09.923) 0:00:35.924 ******** 2026-03-05 00:48:52.965365 | orchestrator | =============================================================================== 2026-03-05 00:48:52.965370 | orchestrator | redis : Restart redis container ---------------------------------------- 11.54s 2026-03-05 00:48:52.965376 | orchestrator | redis : Restart redis-sentinel 
container -------------------------------- 9.92s 2026-03-05 00:48:52.965382 | orchestrator | redis : Copying over default config.json files -------------------------- 4.23s 2026-03-05 00:48:52.965388 | orchestrator | redis : Copying over redis config files --------------------------------- 3.19s 2026-03-05 00:48:52.965393 | orchestrator | redis : Check redis containers ------------------------------------------ 2.31s 2026-03-05 00:48:52.965399 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.48s 2026-03-05 00:48:52.965405 | orchestrator | redis : include_tasks --------------------------------------------------- 0.76s 2026-03-05 00:48:52.965410 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.75s 2026-03-05 00:48:52.965416 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.69s 2026-03-05 00:48:52.965422 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.59s 2026-03-05 00:48:52.965427 | orchestrator | 2026-03-05 00:48:52 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:48:56.005167 | orchestrator | 2026-03-05 00:48:56 | INFO  | Task b7047ca1-215b-4831-bacf-0f6c1d225628 is in state STARTED 2026-03-05 00:48:56.008685 | orchestrator | 2026-03-05 00:48:56 | INFO  | Task 8ec781bc-518e-4c54-a414-2695bac829c3 is in state STARTED 2026-03-05 00:48:56.013634 | orchestrator | 2026-03-05 00:48:56 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED 2026-03-05 00:48:56.014552 | orchestrator | 2026-03-05 00:48:56 | INFO  | Task 7b9dc605-3371-4442-bb74-294a1c91a0c3 is in state STARTED 2026-03-05 00:48:56.017658 | orchestrator | 2026-03-05 00:48:56 | INFO  | Task 54ba0c27-c6ab-48b5-a972-50508998d0d1 is in state STARTED 2026-03-05 00:48:56.017712 | orchestrator | 2026-03-05 00:48:56 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:48:59.072469 | orchestrator | 2026-03-05 
00:48:59 | INFO  | Task b7047ca1-215b-4831-bacf-0f6c1d225628 is in state STARTED 2026-03-05 00:48:59.076171 | orchestrator | 2026-03-05 00:48:59 | INFO  | Task 8ec781bc-518e-4c54-a414-2695bac829c3 is in state STARTED 2026-03-05 00:48:59.078861 | orchestrator | 2026-03-05 00:48:59 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED 2026-03-05 00:48:59.082489 | orchestrator | 2026-03-05 00:48:59 | INFO  | Task 7b9dc605-3371-4442-bb74-294a1c91a0c3 is in state STARTED 2026-03-05 00:48:59.084001 | orchestrator | 2026-03-05 00:48:59 | INFO  | Task 54ba0c27-c6ab-48b5-a972-50508998d0d1 is in state STARTED 2026-03-05 00:48:59.084060 | orchestrator | 2026-03-05 00:48:59 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:49:20.804481 | orchestrator | 2026-03-05 00:49:20 | INFO  | Task b7047ca1-215b-4831-bacf-0f6c1d225628 is in state STARTED 2026-03-05 00:49:20.805914 | orchestrator | 2026-03-05 00:49:20 | INFO  | Task 8ec781bc-518e-4c54-a414-2695bac829c3 is in state STARTED 2026-03-05 00:49:20.806801 | orchestrator | 2026-03-05 00:49:20 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED 2026-03-05 00:49:20.808720 | orchestrator | 2026-03-05 00:49:20 | INFO  | Task
7b9dc605-3371-4442-bb74-294a1c91a0c3 is in state STARTED 2026-03-05 00:49:20.810547 | orchestrator | 2026-03-05 00:49:20 | INFO  | Task 54ba0c27-c6ab-48b5-a972-50508998d0d1 is in state STARTED 2026-03-05 00:49:20.810573 | orchestrator | 2026-03-05 00:49:20 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:49:26.923985 | orchestrator | 2026-03-05 00:49:26 | INFO  | Task f165b63a-ca83-496f-ab98-9850d77450b6 is in state STARTED 2026-03-05 00:49:26.925678 | orchestrator | 2026-03-05 00:49:26 | INFO  | Task b7047ca1-215b-4831-bacf-0f6c1d225628 is in state STARTED 2026-03-05 00:49:26.926135 | orchestrator | 2026-03-05 00:49:26 | INFO  | Task 8ec781bc-518e-4c54-a414-2695bac829c3 is in state STARTED 2026-03-05 00:49:26.927475 | orchestrator | 2026-03-05 00:49:26 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED 2026-03-05 00:49:26.928998 | orchestrator | 2026-03-05 00:49:26 | INFO  | Task 7b9dc605-3371-4442-bb74-294a1c91a0c3 is in state SUCCESS 2026-03-05 00:49:26.930929 | orchestrator | 2026-03-05 00:49:26.930987 | orchestrator | 2026-03-05 00:49:26.930995 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 
2026-03-05 00:49:26.931004 | orchestrator | 2026-03-05 00:49:26.931012 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-05 00:49:26.931020 | orchestrator | Thursday 05 March 2026 00:48:15 +0000 (0:00:00.355) 0:00:00.355 ******** 2026-03-05 00:49:26.931056 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:49:26.931066 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:49:26.931072 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:49:26.931079 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:49:26.931085 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:49:26.931093 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:49:26.931099 | orchestrator | 2026-03-05 00:49:26.931106 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-05 00:49:26.931114 | orchestrator | Thursday 05 March 2026 00:48:16 +0000 (0:00:01.120) 0:00:01.476 ******** 2026-03-05 00:49:26.931139 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-05 00:49:26.931146 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-05 00:49:26.931152 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-05 00:49:26.931159 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-05 00:49:26.931169 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-05 00:49:26.931176 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-05 00:49:26.931182 | orchestrator | 2026-03-05 00:49:26.931189 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2026-03-05 00:49:26.931196 | orchestrator | 2026-03-05 00:49:26.931202 | orchestrator | TASK [openvswitch : include_tasks] 
********************************************* 2026-03-05 00:49:26.931208 | orchestrator | Thursday 05 March 2026 00:48:18 +0000 (0:00:01.186) 0:00:02.662 ******** 2026-03-05 00:49:26.931216 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-05 00:49:26.931225 | orchestrator | 2026-03-05 00:49:26.931231 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-05 00:49:26.931238 | orchestrator | Thursday 05 March 2026 00:48:20 +0000 (0:00:02.455) 0:00:05.118 ******** 2026-03-05 00:49:26.931244 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-03-05 00:49:26.931251 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-03-05 00:49:26.931258 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-03-05 00:49:26.931264 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-03-05 00:49:26.931271 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-03-05 00:49:26.931278 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-03-05 00:49:26.931285 | orchestrator | 2026-03-05 00:49:26.931292 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-05 00:49:26.931299 | orchestrator | Thursday 05 March 2026 00:48:22 +0000 (0:00:02.035) 0:00:07.154 ******** 2026-03-05 00:49:26.931305 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-03-05 00:49:26.931312 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-03-05 00:49:26.931319 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-03-05 00:49:26.931325 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-03-05 00:49:26.931331 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-03-05 00:49:26.931338 | orchestrator | changed: 
[testbed-node-1] => (item=openvswitch) 2026-03-05 00:49:26.931345 | orchestrator | 2026-03-05 00:49:26.931352 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-05 00:49:26.931358 | orchestrator | Thursday 05 March 2026 00:48:24 +0000 (0:00:01.998) 0:00:09.152 ******** 2026-03-05 00:49:26.931364 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-03-05 00:49:26.931371 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:49:26.931379 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-03-05 00:49:26.931386 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:49:26.931393 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-03-05 00:49:26.931399 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:49:26.931406 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-03-05 00:49:26.931412 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:49:26.931419 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-03-05 00:49:26.931426 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:49:26.931439 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-03-05 00:49:26.931446 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:49:26.931458 | orchestrator | 2026-03-05 00:49:26.931465 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-03-05 00:49:26.931472 | orchestrator | Thursday 05 March 2026 00:48:26 +0000 (0:00:01.975) 0:00:11.128 ******** 2026-03-05 00:49:26.931478 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:49:26.931485 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:49:26.931491 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:49:26.931498 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:49:26.931504 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:49:26.931511 | orchestrator | 
skipping: [testbed-node-5] 2026-03-05 00:49:26.931517 | orchestrator | 2026-03-05 00:49:26.931525 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-03-05 00:49:26.931533 | orchestrator | Thursday 05 March 2026 00:48:27 +0000 (0:00:01.179) 0:00:12.307 ******** 2026-03-05 00:49:26.931558 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-05 00:49:26.931569 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-05 00:49:26.931577 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-05 00:49:26.931586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-05 00:49:26.931599 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl 
version'], 'timeout': '30'}}}) 2026-03-05 00:49:26.931611 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-05 00:49:26.931649 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-05 00:49:26.931658 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-05 00:49:26.931667 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-05 00:49:26.931676 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-05 00:49:26.931692 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-05 00:49:26.931707 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-05 00:49:26.931715 | orchestrator | 2026-03-05 00:49:26.931726 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-03-05 00:49:26.931734 | orchestrator | Thursday 05 March 2026 00:48:31 +0000 (0:00:03.424) 0:00:15.731 ******** 2026-03-05 00:49:26.931744 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-05 00:49:26.931755 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-05 00:49:26.931778 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-05 00:49:26.931785 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 
'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-05 00:49:26.931801 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-05 00:49:26.931813 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-05 00:49:26.931822 | orchestrator | 
changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-05 00:49:26.931829 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-05 00:49:26.931840 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-05 00:49:26.931854 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-05 00:49:26.931869 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-05 00:49:26.931895 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-05 00:49:26.931905 | orchestrator | 2026-03-05 00:49:26.931912 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-03-05 00:49:26.931919 | orchestrator | Thursday 05 March 2026 00:48:35 +0000 (0:00:04.226) 0:00:19.958 ******** 2026-03-05 00:49:26.931926 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:49:26.931933 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:49:26.931940 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:49:26.931946 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:49:26.931952 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:49:26.931958 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:49:26.931965 | orchestrator | 2026-03-05 00:49:26.931972 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2026-03-05 00:49:26.931979 | orchestrator | Thursday 05 March 2026 00:48:36 +0000 (0:00:00.822) 0:00:20.780 ******** 2026-03-05 00:49:26.931987 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-05 00:49:26.931994 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-05 00:49:26.932006 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-05 00:49:26.932016 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-05 00:49:26.932029 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-05 00:49:26.932036 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-05 00:49:26.932043 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-05 00:49:26.932055 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-05 00:49:26.932062 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': 
'30'}}}) 2026-03-05 00:49:26.932077 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-05 00:49:26.932089 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-05 00:49:26.932096 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-05 00:49:26.932103 | orchestrator | 2026-03-05 00:49:26.932110 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-05 00:49:26.932116 | orchestrator | Thursday 05 March 2026 00:48:39 +0000 (0:00:02.833) 0:00:23.614 ******** 2026-03-05 00:49:26.932123 | orchestrator | 2026-03-05 00:49:26.932134 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-05 00:49:26.932141 | orchestrator | Thursday 05 March 2026 00:48:39 +0000 (0:00:00.272) 0:00:23.886 ******** 2026-03-05 00:49:26.932148 | orchestrator | 2026-03-05 00:49:26.932154 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-05 00:49:26.932160 | orchestrator | Thursday 05 March 2026 00:48:39 +0000 (0:00:00.132) 0:00:24.019 ******** 2026-03-05 00:49:26.932166 | orchestrator | 2026-03-05 00:49:26.932173 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-05 00:49:26.932179 | orchestrator | Thursday 05 March 2026 00:48:39 +0000 (0:00:00.139) 0:00:24.158 ******** 2026-03-05 00:49:26.932185 | orchestrator | 2026-03-05 00:49:26.932191 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-05 00:49:26.932198 | orchestrator | Thursday 05 March 2026 00:48:39 +0000 (0:00:00.133) 0:00:24.292 ******** 2026-03-05 00:49:26.932205 | orchestrator | 2026-03-05 00:49:26.932211 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-05 00:49:26.932218 | orchestrator | Thursday 05 March 2026 00:48:40 +0000 (0:00:00.297) 0:00:24.590 ******** 2026-03-05 00:49:26.932224 | orchestrator | 2026-03-05 
00:49:26.932231 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-03-05 00:49:26.932238 | orchestrator | Thursday 05 March 2026 00:48:40 +0000 (0:00:00.243) 0:00:24.833 ******** 2026-03-05 00:49:26.932244 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:49:26.932251 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:49:26.932258 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:49:26.932264 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:49:26.932270 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:49:26.932277 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:49:26.932285 | orchestrator | 2026-03-05 00:49:26.932291 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-03-05 00:49:26.932298 | orchestrator | Thursday 05 March 2026 00:48:49 +0000 (0:00:09.155) 0:00:33.988 ******** 2026-03-05 00:49:26.932305 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:49:26.932312 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:49:26.932319 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:49:26.932326 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:49:26.932333 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:49:26.932340 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:49:26.932346 | orchestrator | 2026-03-05 00:49:26.932353 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-03-05 00:49:26.932359 | orchestrator | Thursday 05 March 2026 00:48:51 +0000 (0:00:01.754) 0:00:35.743 ******** 2026-03-05 00:49:26.932366 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:49:26.932372 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:49:26.932379 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:49:26.932386 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:49:26.932396 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:49:26.932403 | 
orchestrator | changed: [testbed-node-5] 2026-03-05 00:49:26.932409 | orchestrator | 2026-03-05 00:49:26.932416 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-03-05 00:49:26.932423 | orchestrator | Thursday 05 March 2026 00:48:57 +0000 (0:00:06.238) 0:00:41.981 ******** 2026-03-05 00:49:26.932429 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-03-05 00:49:26.932437 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-03-05 00:49:26.932444 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-03-05 00:49:26.932451 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-03-05 00:49:26.932457 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-03-05 00:49:26.932473 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-03-05 00:49:26.932480 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-03-05 00:49:26.932486 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-03-05 00:49:26.932492 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-03-05 00:49:26.932499 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-03-05 00:49:26.932505 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 
2026-03-05 00:49:26.932512 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2026-03-05 00:49:26.932519 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-05 00:49:26.932525 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-05 00:49:26.932532 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-05 00:49:26.932538 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-05 00:49:26.932545 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-05 00:49:26.932551 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-05 00:49:26.932558 | orchestrator | 
2026-03-05 00:49:26.932565 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2026-03-05 00:49:26.932571 | orchestrator | Thursday 05 March 2026 00:49:06 +0000 (0:00:08.959) 0:00:50.941 ********
2026-03-05 00:49:26.932578 | orchestrator | skipping: [testbed-node-3] => (item=br-ex) 
2026-03-05 00:49:26.932619 | orchestrator | changed: [testbed-node-0] => (item=br-ex)
2026-03-05 00:49:26.932626 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:49:26.932632 | orchestrator | skipping: [testbed-node-4] => (item=br-ex) 
2026-03-05 00:49:26.932638 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:49:26.932645 | orchestrator | skipping: [testbed-node-5] => (item=br-ex) 
2026-03-05 00:49:26.932651 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:49:26.932657 | orchestrator | changed: [testbed-node-1] => (item=br-ex)
2026-03-05 00:49:26.932663 | orchestrator | changed: [testbed-node-2] => (item=br-ex)
2026-03-05 00:49:26.932670 | orchestrator | 
2026-03-05 00:49:26.932677 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2026-03-05 00:49:26.932683 | orchestrator | Thursday 05 March 2026 00:49:09 +0000 (0:00:03.311) 0:00:54.252 ********
2026-03-05 00:49:26.932690 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0']) 
2026-03-05 00:49:26.932696 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:49:26.932703 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0']) 
2026-03-05 00:49:26.932710 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:49:26.932716 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0']) 
2026-03-05 00:49:26.932723 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:49:26.932729 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2026-03-05 00:49:26.932735 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2026-03-05 00:49:26.932741 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2026-03-05 00:49:26.932748 | orchestrator | 
2026-03-05 00:49:26.932754 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-03-05 00:49:26.932821 | orchestrator | Thursday 05 March 2026 00:49:13 +0000 (0:00:03.403) 0:00:57.655 ********
2026-03-05 00:49:26.932839 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:49:26.932846 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:49:26.932853 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:49:26.932859 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:49:26.932866 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:49:26.932873 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:49:26.932879 | orchestrator | 
2026-03-05 00:49:26.932888 | orchestrator | PLAY RECAP *********************************************************************
2026-03-05 00:49:26.932895 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-05 00:49:26.932902 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-05 00:49:26.932909 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-05 00:49:26.932916 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-05 00:49:26.932922 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-05 00:49:26.932934 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-05 00:49:26.932941 | orchestrator | 
2026-03-05 00:49:26.932947 | orchestrator | 
2026-03-05 00:49:26.932954 | orchestrator | TASKS RECAP ********************************************************************
2026-03-05 00:49:26.932961 | orchestrator | Thursday 05 March 2026 00:49:23 +0000 (0:00:10.572) 0:01:08.228 ********
2026-03-05 00:49:26.932967 | orchestrator | ===============================================================================
2026-03-05 00:49:26.932974 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 16.81s
2026-03-05 00:49:26.932980 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 9.16s
2026-03-05 00:49:26.932987 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 8.96s
2026-03-05 00:49:26.932993 | orchestrator | openvswitch : Copying over config.json files for services --------------- 4.23s
2026-03-05 00:49:26.933000 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 3.42s
2026-03-05 00:49:26.933007 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.40s
2026-03-05 00:49:26.933014 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 3.31s
2026-03-05 00:49:26.933021 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.83s
2026-03-05 00:49:26.933028 | orchestrator | openvswitch : include_tasks --------------------------------------------- 2.46s
2026-03-05 00:49:26.933034 | orchestrator | module-load : Load modules ---------------------------------------------- 2.03s
2026-03-05 00:49:26.933041 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.00s
2026-03-05 00:49:26.933047 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.98s
2026-03-05 00:49:26.933054 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.75s
2026-03-05 00:49:26.933061 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.22s
2026-03-05 00:49:26.933067 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.19s
2026-03-05 00:49:26.933074 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.18s
2026-03-05 00:49:26.933081 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.12s
2026-03-05 00:49:26.933088 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 0.82s
2026-03-05 00:49:26.933192 | orchestrator | 2026-03-05 00:49:26 | INFO  | Task 54ba0c27-c6ab-48b5-a972-50508998d0d1 is in state STARTED
2026-03-05 00:49:26.933203 | orchestrator | 2026-03-05 00:49:26 | INFO  | Wait 1 second(s) until the next check
2026-03-05 00:49:30.098272 | orchestrator | 2026-03-05 00:49:30 | INFO  | Task f165b63a-ca83-496f-ab98-9850d77450b6 is in state STARTED
2026-03-05 00:49:30.102610 | orchestrator | 2026-03-05 00:49:30 | INFO  | Task b7047ca1-215b-4831-bacf-0f6c1d225628 is in state STARTED
2026-03-05 00:49:30.103721 | orchestrator | 2026-03-05 00:49:30 | INFO  | Task 8ec781bc-518e-4c54-a414-2695bac829c3 is in state STARTED
2026-03-05 00:49:30.104979 | orchestrator | 2026-03-05 00:49:30 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED
2026-03-05 00:49:30.107665 | orchestrator | 2026-03-05 00:49:30 | INFO  | Task 54ba0c27-c6ab-48b5-a972-50508998d0d1 is in state STARTED
2026-03-05 00:49:30.107716 | orchestrator | 2026-03-05 00:49:30 | INFO  | Wait 1 second(s) until the next check
2026-03-05 00:49:33.148138 | orchestrator | 2026-03-05 00:49:33 | INFO  | Task f165b63a-ca83-496f-ab98-9850d77450b6 is in state STARTED
2026-03-05 00:49:33.149118 | orchestrator | 2026-03-05 00:49:33 | INFO  | Task b7047ca1-215b-4831-bacf-0f6c1d225628 is in state STARTED
2026-03-05 00:49:33.151316 | orchestrator | 2026-03-05 00:49:33 | INFO  | Task 8ec781bc-518e-4c54-a414-2695bac829c3 is in state STARTED
2026-03-05 00:49:33.152442 | orchestrator | 2026-03-05 00:49:33 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED
2026-03-05 00:49:33.153845 | orchestrator | 2026-03-05 00:49:33 | INFO  | Task 54ba0c27-c6ab-48b5-a972-50508998d0d1 is in state STARTED
2026-03-05 00:49:33.153888 | orchestrator | 2026-03-05 00:49:33 | INFO  | Wait 1 second(s) until the next check
2026-03-05 00:49:36.192020 | orchestrator | 2026-03-05 00:49:36 | INFO  | Task f165b63a-ca83-496f-ab98-9850d77450b6 is in state STARTED
2026-03-05 00:49:36.192714 | orchestrator | 2026-03-05 00:49:36 | INFO  | Task b7047ca1-215b-4831-bacf-0f6c1d225628 is in state STARTED
2026-03-05 00:49:36.194232 | orchestrator | 2026-03-05 00:49:36 | INFO  | Task 8ec781bc-518e-4c54-a414-2695bac829c3 is in state STARTED
2026-03-05 00:49:36.196391 | orchestrator | 2026-03-05 00:49:36 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED
2026-03-05 00:49:36.197546 | orchestrator | 2026-03-05 00:49:36 | INFO  | Task 54ba0c27-c6ab-48b5-a972-50508998d0d1 is in state STARTED
2026-03-05 00:49:36.197991 | orchestrator | 2026-03-05 00:49:36 | INFO  | Wait 1 second(s) until the next check
2026-03-05 00:49:39.306309 | orchestrator | 2026-03-05 00:49:39 | INFO  | Task f165b63a-ca83-496f-ab98-9850d77450b6 is in state STARTED
2026-03-05 00:49:39.307447 | orchestrator | 2026-03-05 00:49:39 | INFO  | Task b7047ca1-215b-4831-bacf-0f6c1d225628 is in state STARTED
2026-03-05 00:49:39.309501 | orchestrator | 2026-03-05 00:49:39 | INFO  | Task 8ec781bc-518e-4c54-a414-2695bac829c3 is in state STARTED
2026-03-05 00:49:39.310616 | orchestrator | 2026-03-05 00:49:39 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED
2026-03-05 00:49:39.311599 | orchestrator | 2026-03-05 00:49:39 | INFO  | Task 54ba0c27-c6ab-48b5-a972-50508998d0d1 is in state STARTED
2026-03-05 00:49:39.311641 | orchestrator | 2026-03-05 00:49:39 | INFO  | Wait 1 second(s) until the next check
2026-03-05 00:49:42.356789 | orchestrator | 2026-03-05 00:49:42 | INFO  | Task f165b63a-ca83-496f-ab98-9850d77450b6 is in state STARTED
2026-03-05 00:49:42.357601 | orchestrator | 2026-03-05 00:49:42 | INFO  | Task b7047ca1-215b-4831-bacf-0f6c1d225628 is in state STARTED
2026-03-05 00:49:42.358574 | orchestrator | 2026-03-05 00:49:42 | INFO  | Task 8ec781bc-518e-4c54-a414-2695bac829c3 is in state STARTED
2026-03-05 00:49:42.361259 | orchestrator | 2026-03-05 00:49:42 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED
2026-03-05 00:49:42.362296 | orchestrator | 2026-03-05 00:49:42 | INFO  | Task 54ba0c27-c6ab-48b5-a972-50508998d0d1 is in state STARTED
2026-03-05 00:49:42.362331 | orchestrator | 2026-03-05 00:49:42 | INFO  | Wait 1 second(s) until the next check
2026-03-05 00:49:45.407725 | orchestrator | 2026-03-05 00:49:45 | INFO  | Task f165b63a-ca83-496f-ab98-9850d77450b6 is in state STARTED
2026-03-05 00:49:45.409182 | orchestrator | 2026-03-05 00:49:45 | INFO  | Task b7047ca1-215b-4831-bacf-0f6c1d225628 is in state STARTED
2026-03-05 00:49:45.413969 | orchestrator | 2026-03-05 00:49:45 | INFO  | Task 8ec781bc-518e-4c54-a414-2695bac829c3 is in state STARTED
2026-03-05 00:49:45.417908 | orchestrator | 2026-03-05 00:49:45 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED
2026-03-05 00:49:45.419111 | orchestrator | 2026-03-05 00:49:45 | INFO  | Task 54ba0c27-c6ab-48b5-a972-50508998d0d1 is in state STARTED
2026-03-05 00:49:45.419229 | orchestrator | 2026-03-05 00:49:45 | INFO  | Wait 1 second(s) until the next check
2026-03-05 00:49:48.466671 | orchestrator | 2026-03-05 00:49:48 | INFO  | Task f165b63a-ca83-496f-ab98-9850d77450b6 is in state STARTED
2026-03-05 00:49:48.468041 | orchestrator | 2026-03-05 00:49:48 | INFO  | Task b7047ca1-215b-4831-bacf-0f6c1d225628 is in state STARTED
2026-03-05 00:49:48.471365 | orchestrator | 2026-03-05 00:49:48 | INFO  | Task 8ec781bc-518e-4c54-a414-2695bac829c3 is in state STARTED
2026-03-05 00:49:48.475124 | orchestrator | 2026-03-05 00:49:48 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED
2026-03-05 00:49:48.477185 | orchestrator | 2026-03-05 00:49:48 | INFO  | Task 54ba0c27-c6ab-48b5-a972-50508998d0d1 is in state STARTED
2026-03-05 00:49:48.477280 | orchestrator | 2026-03-05 00:49:48 | INFO  | Wait 1 second(s) until the next check
2026-03-05 00:49:51.581337 | orchestrator | 2026-03-05 00:49:51 | INFO  | Task f165b63a-ca83-496f-ab98-9850d77450b6 is in state STARTED
2026-03-05 00:49:51.581939 | orchestrator | 2026-03-05 00:49:51 | INFO  | Task b7047ca1-215b-4831-bacf-0f6c1d225628 is in state STARTED
2026-03-05 00:49:51.582989 | orchestrator | 2026-03-05 00:49:51 | INFO  | Task 8ec781bc-518e-4c54-a414-2695bac829c3 is in state STARTED
2026-03-05 00:49:51.584125 | orchestrator | 2026-03-05 00:49:51 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED
2026-03-05 00:49:51.584949 | orchestrator | 2026-03-05 00:49:51 | INFO  | Task 54ba0c27-c6ab-48b5-a972-50508998d0d1 is in state STARTED
2026-03-05 00:49:51.586831 | orchestrator | 2026-03-05 00:49:51 | INFO  | Wait 1 second(s) until the next check
2026-03-05 00:49:54.689280 | orchestrator | 2026-03-05 00:49:54 | INFO  | Task f165b63a-ca83-496f-ab98-9850d77450b6 is in state STARTED
2026-03-05 00:49:54.690707 | orchestrator | 2026-03-05 00:49:54 | INFO  | Task b7047ca1-215b-4831-bacf-0f6c1d225628 is in state STARTED
2026-03-05 00:49:54.692529 | orchestrator | 2026-03-05 00:49:54 | INFO  | Task 8ec781bc-518e-4c54-a414-2695bac829c3 is in state STARTED
2026-03-05 00:49:54.693936 | orchestrator | 2026-03-05 00:49:54 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED
2026-03-05 00:49:54.696108 | orchestrator | 2026-03-05 00:49:54 | INFO  | Task 54ba0c27-c6ab-48b5-a972-50508998d0d1 is in state STARTED
2026-03-05 00:49:54.696151 | orchestrator | 2026-03-05 00:49:54 | INFO  | Wait 1 second(s) until the next check
2026-03-05 00:49:57.753589 | orchestrator | 2026-03-05 00:49:57 | INFO  | Task f165b63a-ca83-496f-ab98-9850d77450b6 is in state STARTED
2026-03-05 00:49:57.753697 | orchestrator | 2026-03-05 00:49:57 | INFO  | Task b7047ca1-215b-4831-bacf-0f6c1d225628 is in state STARTED
2026-03-05 00:49:57.753710 | orchestrator | 2026-03-05 00:49:57 | INFO  | Task 8ec781bc-518e-4c54-a414-2695bac829c3 is in state STARTED
2026-03-05 00:49:57.753718 | orchestrator | 2026-03-05 00:49:57 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED
2026-03-05 00:49:57.753767 | orchestrator | 2026-03-05 00:49:57 | INFO  | Task 54ba0c27-c6ab-48b5-a972-50508998d0d1 is in state STARTED
2026-03-05 00:49:57.753778 | orchestrator | 2026-03-05 00:49:57 | INFO  | Wait 1 second(s) until the next check
2026-03-05 00:50:00.800342 | orchestrator | 2026-03-05 00:50:00 | INFO  | Task f165b63a-ca83-496f-ab98-9850d77450b6 is in state STARTED
2026-03-05 00:50:00.802150 | orchestrator | 2026-03-05 00:50:00 | INFO  | Task b7047ca1-215b-4831-bacf-0f6c1d225628 is in state STARTED
2026-03-05 00:50:00.802682 | orchestrator | 2026-03-05 00:50:00 | INFO  | Task 8ec781bc-518e-4c54-a414-2695bac829c3 is in state STARTED
2026-03-05 00:50:00.804410 | orchestrator | 2026-03-05 00:50:00 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED
2026-03-05 00:50:00.805652 | orchestrator | 2026-03-05 00:50:00 | INFO  | Task 54ba0c27-c6ab-48b5-a972-50508998d0d1 is in state STARTED
2026-03-05 00:50:00.806088 | orchestrator | 2026-03-05 00:50:00 | INFO  | Wait 1 second(s) until the next check
2026-03-05 00:50:03.849456 | orchestrator | 2026-03-05 00:50:03 | INFO  | Task f165b63a-ca83-496f-ab98-9850d77450b6 is in state STARTED
2026-03-05 00:50:03.851554 | orchestrator | 2026-03-05 00:50:03 | INFO  | Task b7047ca1-215b-4831-bacf-0f6c1d225628 is in state STARTED
2026-03-05 00:50:03.857126 | orchestrator | 2026-03-05 00:50:03 | INFO  | Task 8ec781bc-518e-4c54-a414-2695bac829c3 is in state STARTED
2026-03-05 00:50:03.860796 | orchestrator | 2026-03-05 00:50:03 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED
2026-03-05 00:50:03.862615 | orchestrator | 2026-03-05 00:50:03 | INFO  | Task 54ba0c27-c6ab-48b5-a972-50508998d0d1 is in state STARTED
2026-03-05 00:50:03.862924 | orchestrator | 2026-03-05 00:50:03 | INFO  | Wait 1 second(s) until the next check
2026-03-05 00:50:06.940257 | orchestrator | 2026-03-05 00:50:06 | INFO  | Task f165b63a-ca83-496f-ab98-9850d77450b6 is in state STARTED
2026-03-05 00:50:06.944000 | orchestrator | 2026-03-05 00:50:06 | INFO  | Task b7047ca1-215b-4831-bacf-0f6c1d225628 is in state STARTED
2026-03-05 00:50:06.947090 | orchestrator | 2026-03-05 00:50:06 | INFO  | Task 8ec781bc-518e-4c54-a414-2695bac829c3 is in state STARTED
2026-03-05 00:50:06.950773 | orchestrator | 2026-03-05 00:50:06 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED
2026-03-05 00:50:06.951379 | orchestrator | 2026-03-05 00:50:06 | INFO  | Task 54ba0c27-c6ab-48b5-a972-50508998d0d1 is in state STARTED
2026-03-05 00:50:06.951413 | orchestrator | 2026-03-05 00:50:06 | INFO  | Wait 1 second(s) until the next check
2026-03-05 00:50:10.062089 | orchestrator | 2026-03-05 00:50:10 | INFO  | Task f165b63a-ca83-496f-ab98-9850d77450b6 is in state STARTED
2026-03-05 00:50:10.062855 | orchestrator | 2026-03-05 00:50:10 | INFO  | Task b7047ca1-215b-4831-bacf-0f6c1d225628 is in state STARTED
2026-03-05 00:50:10.063654 | orchestrator | 2026-03-05 00:50:10 | INFO  | Task 8ec781bc-518e-4c54-a414-2695bac829c3 is in state STARTED
2026-03-05 00:50:10.064503 | orchestrator | 2026-03-05 00:50:10 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED
2026-03-05 00:50:10.065440 | orchestrator | 2026-03-05 00:50:10 | INFO  | Task 54ba0c27-c6ab-48b5-a972-50508998d0d1 is in state STARTED
2026-03-05 00:50:10.065511 | orchestrator | 2026-03-05 00:50:10 | INFO  | Wait 1 second(s) until the next check
2026-03-05 00:50:13.179255 | orchestrator | 2026-03-05 00:50:13 | INFO  | Task f165b63a-ca83-496f-ab98-9850d77450b6 is in state STARTED
2026-03-05 00:50:13.184450 | orchestrator | 2026-03-05 00:50:13 | INFO  | Task b7047ca1-215b-4831-bacf-0f6c1d225628 is in state STARTED
2026-03-05 00:50:13.186147 | orchestrator | 2026-03-05 00:50:13 | INFO  | Task 8ec781bc-518e-4c54-a414-2695bac829c3 is in state STARTED
2026-03-05 00:50:13.187441 | orchestrator | 2026-03-05 00:50:13 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED
2026-03-05 00:50:13.188694 | orchestrator | 2026-03-05 00:50:13 | INFO  | Task 54ba0c27-c6ab-48b5-a972-50508998d0d1 is in state STARTED
2026-03-05 00:50:13.188852 | orchestrator | 2026-03-05 00:50:13 | INFO  | Wait 1 second(s) until the next check
2026-03-05 00:50:16.553507 | orchestrator | 2026-03-05 00:50:16 | INFO  | Task f165b63a-ca83-496f-ab98-9850d77450b6 is in state STARTED
2026-03-05 00:50:16.553588 | orchestrator | 2026-03-05 00:50:16 | INFO  | Task b7047ca1-215b-4831-bacf-0f6c1d225628 is in state STARTED
2026-03-05 00:50:16.555039 | orchestrator | 2026-03-05 00:50:16 | INFO  | Task 8ec781bc-518e-4c54-a414-2695bac829c3 is in state STARTED
2026-03-05 00:50:16.705000 | orchestrator | 2026-03-05 00:50:16 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED
2026-03-05 00:50:16.705048 | orchestrator | 2026-03-05 00:50:16 | INFO  | Task 54ba0c27-c6ab-48b5-a972-50508998d0d1 is in state STARTED
2026-03-05 00:50:16.705057 | orchestrator | 2026-03-05 00:50:16 | INFO  | Wait 1 second(s) until the next check
2026-03-05 00:50:19.731320 | orchestrator | 2026-03-05 00:50:19 | INFO  | Task f165b63a-ca83-496f-ab98-9850d77450b6 is in state STARTED
2026-03-05 00:50:19.731400 | orchestrator | 2026-03-05 00:50:19 | INFO  | Task b7047ca1-215b-4831-bacf-0f6c1d225628 is in state STARTED
2026-03-05 00:50:19.731414 | orchestrator | 2026-03-05 00:50:19 | INFO  | Task 8ec781bc-518e-4c54-a414-2695bac829c3 is in state STARTED
2026-03-05 00:50:19.731426 | orchestrator | 2026-03-05 00:50:19 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED
2026-03-05 00:50:19.731438 | orchestrator | 2026-03-05 00:50:19 | INFO  | Task 54ba0c27-c6ab-48b5-a972-50508998d0d1 is in state STARTED
2026-03-05 00:50:19.731449 | orchestrator | 2026-03-05 00:50:19 | INFO  | Wait 1 second(s) until the next check
2026-03-05 00:50:23.107565 | orchestrator | 2026-03-05 00:50:23 | INFO  | Task f165b63a-ca83-496f-ab98-9850d77450b6 is in state STARTED
2026-03-05 00:50:23.107656 | orchestrator | 2026-03-05 00:50:23 | INFO  | Task b7047ca1-215b-4831-bacf-0f6c1d225628 is in state STARTED
2026-03-05 00:50:23.107669 | orchestrator | 2026-03-05 00:50:23 | INFO  | Task 8ec781bc-518e-4c54-a414-2695bac829c3 is in state STARTED
2026-03-05 00:50:23.107674 | orchestrator | 2026-03-05 00:50:23 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED
2026-03-05 00:50:23.107678 | orchestrator | 2026-03-05 00:50:23 | INFO  | Task 54ba0c27-c6ab-48b5-a972-50508998d0d1 is in state STARTED
2026-03-05 00:50:23.107683 | orchestrator | 2026-03-05 00:50:23 | INFO  | Wait 1 second(s) until the next check
2026-03-05 00:50:26.188831 | orchestrator | 2026-03-05 00:50:26 | INFO  | Task f165b63a-ca83-496f-ab98-9850d77450b6 is in state STARTED
2026-03-05 00:50:26.188923 | orchestrator | 2026-03-05 00:50:26 | INFO  | Task b7047ca1-215b-4831-bacf-0f6c1d225628 is in state STARTED
2026-03-05 00:50:26.188934 | orchestrator | 2026-03-05 00:50:26 | INFO  | Task 8ec781bc-518e-4c54-a414-2695bac829c3 is in state STARTED
2026-03-05 00:50:26.188969 | orchestrator | 2026-03-05 00:50:26 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED
2026-03-05 00:50:26.189925 | orchestrator | 2026-03-05 00:50:26 | INFO  | Task 54ba0c27-c6ab-48b5-a972-50508998d0d1 is in state STARTED
2026-03-05 00:50:26.191092 | orchestrator | 2026-03-05 00:50:26 | INFO  | Wait 1 second(s) until the next check
2026-03-05 00:50:29.247739 | orchestrator | 2026-03-05 00:50:29 | INFO  | Task f165b63a-ca83-496f-ab98-9850d77450b6 is in state STARTED
2026-03-05 00:50:29.248202 | orchestrator | 2026-03-05 00:50:29 | INFO  | Task cb9d7483-9f79-4de9-b310-65ba5c75c84c is in state STARTED
2026-03-05 00:50:29.248887 | orchestrator | 2026-03-05 00:50:29 | INFO  | Task b7047ca1-215b-4831-bacf-0f6c1d225628 is in state STARTED
2026-03-05 00:50:29.250356 | orchestrator | 2026-03-05 00:50:29 | INFO  | Task 8ec781bc-518e-4c54-a414-2695bac829c3 is in state SUCCESS
2026-03-05 00:50:29.254591 | orchestrator | 
2026-03-05 00:50:29.254636 | orchestrator | 
2026-03-05 00:50:29.254647 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2026-03-05 00:50:29.254657 | orchestrator | 
2026-03-05 00:50:29.254666 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2026-03-05 00:50:29.254675 | orchestrator | Thursday 05 March 2026 00:45:35 +0000 (0:00:00.330) 0:00:00.330 ********
2026-03-05 00:50:29.254684 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:50:29.254708 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:50:29.254717 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:50:29.254726 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:50:29.254735 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:50:29.254744 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:50:29.254753 | orchestrator | 
2026-03-05 00:50:29.254762 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
2026-03-05 00:50:29.254771 | orchestrator | Thursday 05 March 2026 00:45:36 +0000 (0:00:00.958) 0:00:01.288 ********
2026-03-05 00:50:29.254780 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:50:29.254790 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:50:29.254799 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:50:29.254806 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:50:29.254815 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:50:29.254823 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:50:29.254832 | orchestrator | 
2026-03-05 00:50:29.254841 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2026-03-05 00:50:29.254850 | orchestrator | Thursday 05 March 2026 00:45:36 +0000 (0:00:00.685) 0:00:01.974 ********
2026-03-05 00:50:29.254858 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:50:29.254867 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:50:29.254875 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:50:29.254883 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:50:29.254891 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:50:29.254900 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:50:29.254908 | orchestrator | 
2026-03-05 00:50:29.254917 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2026-03-05 00:50:29.254926 | orchestrator | Thursday 05 March 2026 00:45:37 +0000 (0:00:00.675) 0:00:02.649 ********
2026-03-05 00:50:29.254934 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:50:29.254943 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:50:29.254953 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:50:29.254961 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:50:29.254970 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:50:29.254978 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:50:29.254987 | orchestrator | 
2026-03-05 00:50:29.254995 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2026-03-05 00:50:29.255004 | orchestrator | Thursday 05 March 2026 00:45:40 +0000 (0:00:02.760) 0:00:05.410 ********
2026-03-05 00:50:29.255013 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:50:29.255060 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:50:29.255067 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:50:29.255072 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:50:29.255077 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:50:29.255082 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:50:29.255087 | orchestrator | 
2026-03-05 00:50:29.255092 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2026-03-05 00:50:29.255097 | orchestrator | Thursday 05 March 2026 00:45:41 +0000 (0:00:01.118) 0:00:06.528 ********
2026-03-05 00:50:29.255103 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:50:29.255108 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:50:29.255113 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:50:29.255118 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:50:29.255123 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:50:29.255128 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:50:29.255133 | orchestrator | 
2026-03-05 00:50:29.255138 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2026-03-05 00:50:29.255143 | orchestrator | Thursday 05 March 2026 00:45:42 +0000 (0:00:01.002) 0:00:07.531 ********
2026-03-05 00:50:29.255148 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:50:29.255153 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:50:29.255158 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:50:29.255163 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:50:29.255168 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:50:29.255173 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:50:29.255180 | orchestrator | 
2026-03-05 00:50:29.255187 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2026-03-05 00:50:29.255193 | orchestrator | Thursday 05 March 2026 00:45:43 +0000 (0:00:00.849) 0:00:08.381 ********
2026-03-05 00:50:29.255198 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:50:29.255204 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:50:29.255211 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:50:29.255217 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:50:29.255222 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:50:29.255228 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:50:29.255234 | orchestrator | 
2026-03-05 00:50:29.255240 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2026-03-05 00:50:29.255246 | orchestrator | Thursday 05 March 2026 00:45:43 +0000 (0:00:00.619) 0:00:09.000 ********
2026-03-05 00:50:29.255252 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 
2026-03-05 00:50:29.255258 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 
2026-03-05 00:50:29.255264 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:50:29.255270 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 
2026-03-05 00:50:29.255276 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 
2026-03-05 00:50:29.255282 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:50:29.255287 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 
2026-03-05 00:50:29.255298 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 
2026-03-05 00:50:29.255304 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:50:29.255309 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables) 
2026-03-05 00:50:29.255324 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables) 
2026-03-05 00:50:29.255330 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:50:29.255336 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables) 
2026-03-05 00:50:29.255341 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables) 
2026-03-05 00:50:29.255347 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:50:29.255359 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables) 
2026-03-05 00:50:29.255368 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables) 
2026-03-05 00:50:29.255373 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:50:29.255379 | orchestrator | 
2026-03-05 00:50:29.255385 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2026-03-05 00:50:29.255390 | orchestrator | Thursday 05 March 2026 00:45:44 +0000 (0:00:00.615) 0:00:09.616 ********
2026-03-05 00:50:29.255396 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:50:29.255401 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:50:29.255407 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:50:29.255412 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:50:29.255417 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:50:29.255423 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:50:29.255428 | orchestrator | 
2026-03-05 00:50:29.255434 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2026-03-05 00:50:29.255441 | orchestrator | Thursday 05 March 2026 00:45:45 +0000 (0:00:01.210) 0:00:10.827 ********
2026-03-05 00:50:29.255446 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:50:29.255452 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:50:29.255458 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:50:29.255463 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:50:29.255468 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:50:29.255473 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:50:29.255477 | orchestrator | 
2026-03-05 00:50:29.255482 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2026-03-05 00:50:29.255487 | orchestrator | Thursday 05 March 2026 00:45:46 +0000 (0:00:00.856) 0:00:11.683 ********
2026-03-05 00:50:29.255492 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:50:29.255497 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:50:29.255502 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:50:29.255507 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:50:29.255511 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:50:29.255516 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:50:29.255521 | orchestrator | 
2026-03-05 00:50:29.255526 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2026-03-05 00:50:29.255531 | orchestrator | Thursday 05 March 2026 00:45:52 +0000 (0:00:05.603) 0:00:17.287 ********
2026-03-05 00:50:29.255536 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:50:29.255540 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:50:29.255545 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:50:29.255550 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:50:29.255555 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:50:29.255559 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:50:29.255564 | orchestrator | 2026-03-05 00:50:29.255569 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2026-03-05 00:50:29.255574 | orchestrator | Thursday 05 March 2026 00:45:54 +0000 (0:00:02.215) 0:00:19.502 ******** 2026-03-05 00:50:29.255579 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:50:29.255584 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:50:29.255588 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:50:29.255593 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:50:29.255598 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:50:29.255603 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:50:29.255608 | orchestrator | 2026-03-05 00:50:29.255613 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2026-03-05 00:50:29.255619 | orchestrator | Thursday 05 March 2026 00:45:57 +0000 (0:00:02.708) 0:00:22.211 ******** 2026-03-05 00:50:29.255623 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:50:29.255628 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:50:29.255633 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:50:29.255638 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:50:29.255642 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:50:29.255650 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:50:29.255655 | orchestrator | 2026-03-05 00:50:29.255660 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 
2026-03-05 00:50:29.255665 | orchestrator | Thursday 05 March 2026 00:45:58 +0000 (0:00:01.147) 0:00:23.359 ******** 2026-03-05 00:50:29.255670 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2026-03-05 00:50:29.255675 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2026-03-05 00:50:29.255679 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:50:29.255684 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2026-03-05 00:50:29.255720 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2026-03-05 00:50:29.255726 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:50:29.255731 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2026-03-05 00:50:29.255735 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2026-03-05 00:50:29.255740 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:50:29.255745 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2026-03-05 00:50:29.255750 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2026-03-05 00:50:29.255755 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:50:29.255759 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2026-03-05 00:50:29.255764 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2026-03-05 00:50:29.255769 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:50:29.255774 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2026-03-05 00:50:29.255782 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2026-03-05 00:50:29.255787 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:50:29.255791 | orchestrator | 2026-03-05 00:50:29.255796 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2026-03-05 00:50:29.255805 | orchestrator | Thursday 05 March 2026 00:46:01 +0000 (0:00:03.674) 0:00:27.034 ******** 2026-03-05 00:50:29.255810 | orchestrator | skipping: 
[testbed-node-3] 2026-03-05 00:50:29.255815 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:50:29.255819 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:50:29.255824 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:50:29.255829 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:50:29.255834 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:50:29.255839 | orchestrator | 2026-03-05 00:50:29.255844 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] *** 2026-03-05 00:50:29.255849 | orchestrator | Thursday 05 March 2026 00:46:03 +0000 (0:00:01.700) 0:00:28.734 ******** 2026-03-05 00:50:29.255853 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:50:29.255858 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:50:29.255863 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:50:29.255868 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:50:29.255873 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:50:29.255877 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:50:29.255882 | orchestrator | 2026-03-05 00:50:29.255887 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2026-03-05 00:50:29.255892 | orchestrator | 2026-03-05 00:50:29.255897 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2026-03-05 00:50:29.255902 | orchestrator | Thursday 05 March 2026 00:46:06 +0000 (0:00:02.525) 0:00:31.259 ******** 2026-03-05 00:50:29.255907 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:50:29.255911 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:50:29.255916 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:50:29.255921 | orchestrator | 2026-03-05 00:50:29.255926 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2026-03-05 00:50:29.255931 | orchestrator | Thursday 05 March 
2026 00:46:08 +0000 (0:00:02.738) 0:00:34.000 ******** 2026-03-05 00:50:29.255936 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:50:29.255941 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:50:29.255945 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:50:29.255953 | orchestrator | 2026-03-05 00:50:29.255965 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2026-03-05 00:50:29.255971 | orchestrator | Thursday 05 March 2026 00:46:10 +0000 (0:00:01.964) 0:00:35.965 ******** 2026-03-05 00:50:29.255975 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:50:29.255980 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:50:29.255990 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:50:29.255995 | orchestrator | 2026-03-05 00:50:29.256000 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2026-03-05 00:50:29.256004 | orchestrator | Thursday 05 March 2026 00:46:11 +0000 (0:00:01.133) 0:00:37.099 ******** 2026-03-05 00:50:29.256009 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:50:29.256014 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:50:29.256019 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:50:29.256024 | orchestrator | 2026-03-05 00:50:29.256028 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2026-03-05 00:50:29.256033 | orchestrator | Thursday 05 March 2026 00:46:12 +0000 (0:00:00.953) 0:00:38.052 ******** 2026-03-05 00:50:29.256038 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:50:29.256043 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:50:29.256048 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:50:29.256053 | orchestrator | 2026-03-05 00:50:29.256058 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2026-03-05 00:50:29.256063 | orchestrator | Thursday 05 March 2026 00:46:13 +0000 (0:00:01.023) 0:00:39.076 ******** 
2026-03-05 00:50:29.256067 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:50:29.256072 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:50:29.256077 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:50:29.256082 | orchestrator | 2026-03-05 00:50:29.256087 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2026-03-05 00:50:29.256092 | orchestrator | Thursday 05 March 2026 00:46:15 +0000 (0:00:01.151) 0:00:40.228 ******** 2026-03-05 00:50:29.256096 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:50:29.256101 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:50:29.256106 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:50:29.256111 | orchestrator | 2026-03-05 00:50:29.256116 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2026-03-05 00:50:29.256121 | orchestrator | Thursday 05 March 2026 00:46:16 +0000 (0:00:01.864) 0:00:42.093 ******** 2026-03-05 00:50:29.256126 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:50:29.256131 | orchestrator | 2026-03-05 00:50:29.256135 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2026-03-05 00:50:29.256140 | orchestrator | Thursday 05 March 2026 00:46:17 +0000 (0:00:00.898) 0:00:42.992 ******** 2026-03-05 00:50:29.256145 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:50:29.256150 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:50:29.256155 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:50:29.256160 | orchestrator | 2026-03-05 00:50:29.256165 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2026-03-05 00:50:29.256169 | orchestrator | Thursday 05 March 2026 00:46:20 +0000 (0:00:02.125) 0:00:45.117 ******** 2026-03-05 00:50:29.256174 | orchestrator | skipping: [testbed-node-1] 2026-03-05 
00:50:29.256179 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:50:29.256184 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:50:29.256189 | orchestrator | 2026-03-05 00:50:29.256193 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2026-03-05 00:50:29.256198 | orchestrator | Thursday 05 March 2026 00:46:20 +0000 (0:00:00.805) 0:00:45.923 ******** 2026-03-05 00:50:29.256203 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:50:29.256208 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:50:29.256213 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:50:29.256217 | orchestrator | 2026-03-05 00:50:29.256224 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2026-03-05 00:50:29.256236 | orchestrator | Thursday 05 March 2026 00:46:21 +0000 (0:00:00.942) 0:00:46.865 ******** 2026-03-05 00:50:29.256244 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:50:29.256253 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:50:29.256261 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:50:29.256268 | orchestrator | 2026-03-05 00:50:29.256276 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2026-03-05 00:50:29.256288 | orchestrator | Thursday 05 March 2026 00:46:23 +0000 (0:00:01.788) 0:00:48.653 ******** 2026-03-05 00:50:29.256296 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:50:29.256304 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:50:29.256312 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:50:29.256320 | orchestrator | 2026-03-05 00:50:29.256328 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2026-03-05 00:50:29.256336 | orchestrator | Thursday 05 March 2026 00:46:24 +0000 (0:00:00.607) 0:00:49.261 ******** 2026-03-05 00:50:29.256344 | orchestrator | skipping: [testbed-node-0] 2026-03-05 
00:50:29.256352 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:50:29.256360 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:50:29.256368 | orchestrator | 2026-03-05 00:50:29.256377 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2026-03-05 00:50:29.256385 | orchestrator | Thursday 05 March 2026 00:46:24 +0000 (0:00:00.524) 0:00:49.785 ******** 2026-03-05 00:50:29.256393 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:50:29.256401 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:50:29.256410 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:50:29.256418 | orchestrator | 2026-03-05 00:50:29.256427 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] ********** 2026-03-05 00:50:29.256435 | orchestrator | Thursday 05 March 2026 00:46:27 +0000 (0:00:02.787) 0:00:52.573 ******** 2026-03-05 00:50:29.256444 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:50:29.256453 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:50:29.256461 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:50:29.256470 | orchestrator | 2026-03-05 00:50:29.256477 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] *** 2026-03-05 00:50:29.256486 | orchestrator | Thursday 05 March 2026 00:46:30 +0000 (0:00:02.970) 0:00:55.543 ******** 2026-03-05 00:50:29.256494 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:50:29.256502 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:50:29.256511 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:50:29.256519 | orchestrator | 2026-03-05 00:50:29.256528 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2026-03-05 00:50:29.256536 | orchestrator | Thursday 05 March 2026 00:46:32 +0000 (0:00:01.570) 0:00:57.113 ******** 2026-03-05 00:50:29.256545 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes 
actually joined (check k3s-init.service if this fails) (20 retries left). 2026-03-05 00:50:29.256553 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-03-05 00:50:29.256561 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-03-05 00:50:29.256570 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-03-05 00:50:29.256578 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-03-05 00:50:29.256586 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-03-05 00:50:29.256595 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-03-05 00:50:29.256603 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-03-05 00:50:29.256617 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-03-05 00:50:29.256622 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-03-05 00:50:29.256956 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 
2026-03-05 00:50:29.256968 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-03-05 00:50:29.256973 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:50:29.256978 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:50:29.256983 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:50:29.256987 | orchestrator | 2026-03-05 00:50:29.256992 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2026-03-05 00:50:29.256997 | orchestrator | Thursday 05 March 2026 00:47:15 +0000 (0:00:43.762) 0:01:40.876 ******** 2026-03-05 00:50:29.257002 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:50:29.257007 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:50:29.257012 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:50:29.257017 | orchestrator | 2026-03-05 00:50:29.257021 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2026-03-05 00:50:29.257026 | orchestrator | Thursday 05 March 2026 00:47:16 +0000 (0:00:00.819) 0:01:41.695 ******** 2026-03-05 00:50:29.257031 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:50:29.257036 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:50:29.257041 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:50:29.257045 | orchestrator | 2026-03-05 00:50:29.257050 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2026-03-05 00:50:29.257055 | orchestrator | Thursday 05 March 2026 00:47:17 +0000 (0:00:01.242) 0:01:42.938 ******** 2026-03-05 00:50:29.257060 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:50:29.257065 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:50:29.257069 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:50:29.257074 | orchestrator | 2026-03-05 00:50:29.257084 | orchestrator | TASK [k3s_server : Enable and check K3s service] 
******************************* 2026-03-05 00:50:29.257092 | orchestrator | Thursday 05 March 2026 00:47:20 +0000 (0:00:02.775) 0:01:45.714 ******** 2026-03-05 00:50:29.257097 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:50:29.257102 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:50:29.257107 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:50:29.257111 | orchestrator | 2026-03-05 00:50:29.257116 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2026-03-05 00:50:29.257121 | orchestrator | Thursday 05 March 2026 00:47:44 +0000 (0:00:24.014) 0:02:09.728 ******** 2026-03-05 00:50:29.257126 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:50:29.257131 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:50:29.257136 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:50:29.257140 | orchestrator | 2026-03-05 00:50:29.257145 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2026-03-05 00:50:29.257150 | orchestrator | Thursday 05 March 2026 00:47:45 +0000 (0:00:00.975) 0:02:10.704 ******** 2026-03-05 00:50:29.257155 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:50:29.257160 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:50:29.257165 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:50:29.257169 | orchestrator | 2026-03-05 00:50:29.257174 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2026-03-05 00:50:29.257179 | orchestrator | Thursday 05 March 2026 00:47:46 +0000 (0:00:00.625) 0:02:11.329 ******** 2026-03-05 00:50:29.257184 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:50:29.257189 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:50:29.257193 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:50:29.257198 | orchestrator | 2026-03-05 00:50:29.257208 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2026-03-05 
00:50:29.257216 | orchestrator | Thursday 05 March 2026 00:47:47 +0000 (0:00:00.824) 0:02:12.154 ******** 2026-03-05 00:50:29.257223 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:50:29.257230 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:50:29.257237 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:50:29.257245 | orchestrator | 2026-03-05 00:50:29.257253 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2026-03-05 00:50:29.257261 | orchestrator | Thursday 05 March 2026 00:47:47 +0000 (0:00:00.673) 0:02:12.827 ******** 2026-03-05 00:50:29.257268 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:50:29.257308 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:50:29.257317 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:50:29.257325 | orchestrator | 2026-03-05 00:50:29.257333 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2026-03-05 00:50:29.257341 | orchestrator | Thursday 05 March 2026 00:47:48 +0000 (0:00:00.293) 0:02:13.120 ******** 2026-03-05 00:50:29.257346 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:50:29.257351 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:50:29.257356 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:50:29.257361 | orchestrator | 2026-03-05 00:50:29.257366 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2026-03-05 00:50:29.257371 | orchestrator | Thursday 05 March 2026 00:47:48 +0000 (0:00:00.672) 0:02:13.793 ******** 2026-03-05 00:50:29.257375 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:50:29.257380 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:50:29.257385 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:50:29.257390 | orchestrator | 2026-03-05 00:50:29.257395 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2026-03-05 00:50:29.257399 | orchestrator | Thursday 05 
March 2026 00:47:49 +0000 (0:00:00.752) 0:02:14.545 ******** 2026-03-05 00:50:29.257404 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:50:29.257409 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:50:29.257414 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:50:29.257419 | orchestrator | 2026-03-05 00:50:29.257423 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2026-03-05 00:50:29.257428 | orchestrator | Thursday 05 March 2026 00:47:50 +0000 (0:00:01.277) 0:02:15.823 ******** 2026-03-05 00:50:29.257433 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:50:29.257438 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:50:29.257443 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:50:29.257447 | orchestrator | 2026-03-05 00:50:29.257477 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2026-03-05 00:50:29.257482 | orchestrator | Thursday 05 March 2026 00:47:51 +0000 (0:00:00.815) 0:02:16.638 ******** 2026-03-05 00:50:29.257487 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:50:29.257492 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:50:29.257497 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:50:29.257501 | orchestrator | 2026-03-05 00:50:29.257506 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2026-03-05 00:50:29.257511 | orchestrator | Thursday 05 March 2026 00:47:51 +0000 (0:00:00.275) 0:02:16.914 ******** 2026-03-05 00:50:29.257516 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:50:29.257521 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:50:29.257525 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:50:29.257530 | orchestrator | 2026-03-05 00:50:29.257535 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2026-03-05 00:50:29.257540 | orchestrator | Thursday 05 March 
2026 00:47:52 +0000 (0:00:00.307) 0:02:17.221 ******** 2026-03-05 00:50:29.257545 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:50:29.257550 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:50:29.257555 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:50:29.257559 | orchestrator | 2026-03-05 00:50:29.257564 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2026-03-05 00:50:29.257574 | orchestrator | Thursday 05 March 2026 00:47:53 +0000 (0:00:01.017) 0:02:18.239 ******** 2026-03-05 00:50:29.257579 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:50:29.257584 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:50:29.257589 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:50:29.257594 | orchestrator | 2026-03-05 00:50:29.257599 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2026-03-05 00:50:29.257604 | orchestrator | Thursday 05 March 2026 00:47:53 +0000 (0:00:00.666) 0:02:18.906 ******** 2026-03-05 00:50:29.257609 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-03-05 00:50:29.257619 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-03-05 00:50:29.257628 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-03-05 00:50:29.257633 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-03-05 00:50:29.257637 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-03-05 00:50:29.257642 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-03-05 00:50:29.257647 | orchestrator | changed: [testbed-node-0] => 
(item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-03-05 00:50:29.257652 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-03-05 00:50:29.257657 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-03-05 00:50:29.257661 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2026-03-05 00:50:29.257666 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-03-05 00:50:29.257671 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-03-05 00:50:29.257676 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2026-03-05 00:50:29.257680 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-03-05 00:50:29.257685 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-03-05 00:50:29.257705 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-03-05 00:50:29.257713 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-03-05 00:50:29.257722 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-03-05 00:50:29.257727 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-03-05 00:50:29.257731 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-03-05 00:50:29.257736 | orchestrator | 2026-03-05 00:50:29.257741 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2026-03-05 00:50:29.257746 | orchestrator | 2026-03-05 00:50:29.257751 | 
orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2026-03-05 00:50:29.257756 | orchestrator | Thursday 05 March 2026 00:47:56 +0000 (0:00:03.112) 0:02:22.019 ******** 2026-03-05 00:50:29.257761 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:50:29.257766 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:50:29.257771 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:50:29.257775 | orchestrator | 2026-03-05 00:50:29.257780 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2026-03-05 00:50:29.257785 | orchestrator | Thursday 05 March 2026 00:47:57 +0000 (0:00:00.682) 0:02:22.701 ******** 2026-03-05 00:50:29.257790 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:50:29.257795 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:50:29.257807 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:50:29.257814 | orchestrator | 2026-03-05 00:50:29.257820 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2026-03-05 00:50:29.257827 | orchestrator | Thursday 05 March 2026 00:47:58 +0000 (0:00:00.668) 0:02:23.370 ******** 2026-03-05 00:50:29.257834 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:50:29.257843 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:50:29.257854 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:50:29.257862 | orchestrator | 2026-03-05 00:50:29.257870 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2026-03-05 00:50:29.257879 | orchestrator | Thursday 05 March 2026 00:47:58 +0000 (0:00:00.443) 0:02:23.813 ******** 2026-03-05 00:50:29.257887 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-05 00:50:29.257896 | orchestrator | 2026-03-05 00:50:29.257903 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2026-03-05 
00:50:29.257908 | orchestrator | Thursday 05 March 2026 00:47:59 +0000 (0:00:01.027) 0:02:24.840 ******** 2026-03-05 00:50:29.257914 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:50:29.257918 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:50:29.257923 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:50:29.257928 | orchestrator | 2026-03-05 00:50:29.257933 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2026-03-05 00:50:29.257938 | orchestrator | Thursday 05 March 2026 00:48:00 +0000 (0:00:00.380) 0:02:25.221 ******** 2026-03-05 00:50:29.257942 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:50:29.257949 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:50:29.257957 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:50:29.257965 | orchestrator | 2026-03-05 00:50:29.257974 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2026-03-05 00:50:29.257982 | orchestrator | Thursday 05 March 2026 00:48:00 +0000 (0:00:00.347) 0:02:25.568 ******** 2026-03-05 00:50:29.258045 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:50:29.258058 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:50:29.258067 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:50:29.258072 | orchestrator | 2026-03-05 00:50:29.258076 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2026-03-05 00:50:29.258081 | orchestrator | Thursday 05 March 2026 00:48:00 +0000 (0:00:00.350) 0:02:25.918 ******** 2026-03-05 00:50:29.258086 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:50:29.258093 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:50:29.258101 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:50:29.258110 | orchestrator | 2026-03-05 00:50:29.258156 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2026-03-05 
00:50:29.258170 | orchestrator | Thursday 05 March 2026 00:48:01 +0000 (0:00:00.909) 0:02:26.828 ******** 2026-03-05 00:50:29.258175 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:50:29.258183 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:50:29.258191 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:50:29.258199 | orchestrator | 2026-03-05 00:50:29.258208 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2026-03-05 00:50:29.258217 | orchestrator | Thursday 05 March 2026 00:48:03 +0000 (0:00:01.363) 0:02:28.191 ******** 2026-03-05 00:50:29.258225 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:50:29.258232 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:50:29.258237 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:50:29.258241 | orchestrator | 2026-03-05 00:50:29.258246 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2026-03-05 00:50:29.258251 | orchestrator | Thursday 05 March 2026 00:48:04 +0000 (0:00:01.416) 0:02:29.608 ******** 2026-03-05 00:50:29.258256 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:50:29.258261 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:50:29.258266 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:50:29.258290 | orchestrator | 2026-03-05 00:50:29.258300 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-03-05 00:50:29.258305 | orchestrator | 2026-03-05 00:50:29.258310 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-03-05 00:50:29.258315 | orchestrator | Thursday 05 March 2026 00:48:15 +0000 (0:00:10.710) 0:02:40.318 ******** 2026-03-05 00:50:29.258320 | orchestrator | ok: [testbed-manager] 2026-03-05 00:50:29.258325 | orchestrator | 2026-03-05 00:50:29.258330 | orchestrator | TASK [Create .kube directory] ************************************************** 
2026-03-05 00:50:29.258334 | orchestrator | Thursday 05 March 2026 00:48:16 +0000 (0:00:00.919) 0:02:41.238 ******** 2026-03-05 00:50:29.258339 | orchestrator | changed: [testbed-manager] 2026-03-05 00:50:29.258344 | orchestrator | 2026-03-05 00:50:29.258349 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-03-05 00:50:29.258354 | orchestrator | Thursday 05 March 2026 00:48:16 +0000 (0:00:00.560) 0:02:41.798 ******** 2026-03-05 00:50:29.258359 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-03-05 00:50:29.258363 | orchestrator | 2026-03-05 00:50:29.258368 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-03-05 00:50:29.258373 | orchestrator | Thursday 05 March 2026 00:48:17 +0000 (0:00:00.544) 0:02:42.343 ******** 2026-03-05 00:50:29.258378 | orchestrator | changed: [testbed-manager] 2026-03-05 00:50:29.258382 | orchestrator | 2026-03-05 00:50:29.258387 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-03-05 00:50:29.258392 | orchestrator | Thursday 05 March 2026 00:48:18 +0000 (0:00:01.058) 0:02:43.401 ******** 2026-03-05 00:50:29.258397 | orchestrator | changed: [testbed-manager] 2026-03-05 00:50:29.258402 | orchestrator | 2026-03-05 00:50:29.258406 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-03-05 00:50:29.258411 | orchestrator | Thursday 05 March 2026 00:48:18 +0000 (0:00:00.692) 0:02:44.094 ******** 2026-03-05 00:50:29.258416 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-05 00:50:29.258421 | orchestrator | 2026-03-05 00:50:29.258426 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-03-05 00:50:29.258431 | orchestrator | Thursday 05 March 2026 00:48:20 +0000 (0:00:01.812) 0:02:45.907 ******** 2026-03-05 00:50:29.258435 | orchestrator | changed: 
[testbed-manager -> localhost] 2026-03-05 00:50:29.258440 | orchestrator | 2026-03-05 00:50:29.258445 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2026-03-05 00:50:29.258450 | orchestrator | Thursday 05 March 2026 00:48:21 +0000 (0:00:01.096) 0:02:47.004 ******** 2026-03-05 00:50:29.258455 | orchestrator | changed: [testbed-manager] 2026-03-05 00:50:29.258459 | orchestrator | 2026-03-05 00:50:29.258464 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-03-05 00:50:29.258469 | orchestrator | Thursday 05 March 2026 00:48:22 +0000 (0:00:00.739) 0:02:47.743 ******** 2026-03-05 00:50:29.258474 | orchestrator | changed: [testbed-manager] 2026-03-05 00:50:29.258479 | orchestrator | 2026-03-05 00:50:29.258484 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2026-03-05 00:50:29.258489 | orchestrator | 2026-03-05 00:50:29.258494 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2026-03-05 00:50:29.258499 | orchestrator | Thursday 05 March 2026 00:48:23 +0000 (0:00:00.617) 0:02:48.361 ******** 2026-03-05 00:50:29.258503 | orchestrator | ok: [testbed-manager] 2026-03-05 00:50:29.258508 | orchestrator | 2026-03-05 00:50:29.258513 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2026-03-05 00:50:29.258518 | orchestrator | Thursday 05 March 2026 00:48:23 +0000 (0:00:00.159) 0:02:48.521 ******** 2026-03-05 00:50:29.258523 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2026-03-05 00:50:29.258528 | orchestrator | 2026-03-05 00:50:29.258533 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2026-03-05 00:50:29.258538 | orchestrator | Thursday 05 March 2026 00:48:23 +0000 (0:00:00.248) 0:02:48.769 ******** 2026-03-05 00:50:29.258545 | 
orchestrator | ok: [testbed-manager] 2026-03-05 00:50:29.258550 | orchestrator | 2026-03-05 00:50:29.258555 | orchestrator | TASK [kubectl : Install apt-transport-https package] *************************** 2026-03-05 00:50:29.258560 | orchestrator | Thursday 05 March 2026 00:48:24 +0000 (0:00:01.034) 0:02:49.803 ******** 2026-03-05 00:50:29.258565 | orchestrator | ok: [testbed-manager] 2026-03-05 00:50:29.258570 | orchestrator | 2026-03-05 00:50:29.258574 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2026-03-05 00:50:29.258579 | orchestrator | Thursday 05 March 2026 00:48:26 +0000 (0:00:01.866) 0:02:51.669 ******** 2026-03-05 00:50:29.258584 | orchestrator | changed: [testbed-manager] 2026-03-05 00:50:29.258589 | orchestrator | 2026-03-05 00:50:29.258594 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2026-03-05 00:50:29.258599 | orchestrator | Thursday 05 March 2026 00:48:27 +0000 (0:00:00.936) 0:02:52.606 ******** 2026-03-05 00:50:29.258604 | orchestrator | ok: [testbed-manager] 2026-03-05 00:50:29.258609 | orchestrator | 2026-03-05 00:50:29.258618 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2026-03-05 00:50:29.258626 | orchestrator | Thursday 05 March 2026 00:48:28 +0000 (0:00:00.507) 0:02:53.113 ******** 2026-03-05 00:50:29.258631 | orchestrator | changed: [testbed-manager] 2026-03-05 00:50:29.258636 | orchestrator | 2026-03-05 00:50:29.258641 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2026-03-05 00:50:29.258646 | orchestrator | Thursday 05 March 2026 00:48:37 +0000 (0:00:09.485) 0:03:02.599 ******** 2026-03-05 00:50:29.258651 | orchestrator | changed: [testbed-manager] 2026-03-05 00:50:29.258656 | orchestrator | 2026-03-05 00:50:29.258661 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2026-03-05 
00:50:29.258666 | orchestrator | Thursday 05 March 2026 00:48:52 +0000 (0:00:14.570) 0:03:17.169 ******** 2026-03-05 00:50:29.258671 | orchestrator | ok: [testbed-manager] 2026-03-05 00:50:29.258676 | orchestrator | 2026-03-05 00:50:29.258680 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2026-03-05 00:50:29.258685 | orchestrator | 2026-03-05 00:50:29.258710 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2026-03-05 00:50:29.258719 | orchestrator | Thursday 05 March 2026 00:48:52 +0000 (0:00:00.622) 0:03:17.792 ******** 2026-03-05 00:50:29.258728 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:50:29.258737 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:50:29.258745 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:50:29.258752 | orchestrator | 2026-03-05 00:50:29.258757 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2026-03-05 00:50:29.258762 | orchestrator | Thursday 05 March 2026 00:48:53 +0000 (0:00:00.388) 0:03:18.180 ******** 2026-03-05 00:50:29.258767 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:50:29.258772 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:50:29.258777 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:50:29.258782 | orchestrator | 2026-03-05 00:50:29.258787 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2026-03-05 00:50:29.258792 | orchestrator | Thursday 05 March 2026 00:48:53 +0000 (0:00:00.381) 0:03:18.561 ******** 2026-03-05 00:50:29.258796 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:50:29.258801 | orchestrator | 2026-03-05 00:50:29.258806 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2026-03-05 00:50:29.258811 | orchestrator | Thursday 
05 March 2026 00:48:54 +0000 (0:00:00.941) 0:03:19.504 ******** 2026-03-05 00:50:29.258816 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-05 00:50:29.258821 | orchestrator | 2026-03-05 00:50:29.258826 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2026-03-05 00:50:29.258831 | orchestrator | Thursday 05 March 2026 00:48:55 +0000 (0:00:01.189) 0:03:20.693 ******** 2026-03-05 00:50:29.258836 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-05 00:50:29.258841 | orchestrator | 2026-03-05 00:50:29.258845 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2026-03-05 00:50:29.258855 | orchestrator | Thursday 05 March 2026 00:48:56 +0000 (0:00:01.074) 0:03:21.768 ******** 2026-03-05 00:50:29.258860 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:50:29.258864 | orchestrator | 2026-03-05 00:50:29.258869 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2026-03-05 00:50:29.258874 | orchestrator | Thursday 05 March 2026 00:48:56 +0000 (0:00:00.139) 0:03:21.907 ******** 2026-03-05 00:50:29.258879 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-05 00:50:29.258884 | orchestrator | 2026-03-05 00:50:29.258889 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2026-03-05 00:50:29.258894 | orchestrator | Thursday 05 March 2026 00:48:58 +0000 (0:00:01.419) 0:03:23.327 ******** 2026-03-05 00:50:29.258898 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:50:29.258903 | orchestrator | 2026-03-05 00:50:29.258908 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2026-03-05 00:50:29.258913 | orchestrator | Thursday 05 March 2026 00:48:58 +0000 (0:00:00.132) 0:03:23.460 ******** 2026-03-05 00:50:29.258918 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:50:29.258922 | orchestrator | 
2026-03-05 00:50:29.258927 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2026-03-05 00:50:29.258932 | orchestrator | Thursday 05 March 2026 00:48:58 +0000 (0:00:00.137) 0:03:23.597 ******** 2026-03-05 00:50:29.258937 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:50:29.258942 | orchestrator | 2026-03-05 00:50:29.258947 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2026-03-05 00:50:29.258952 | orchestrator | Thursday 05 March 2026 00:48:58 +0000 (0:00:00.156) 0:03:23.753 ******** 2026-03-05 00:50:29.258956 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:50:29.258961 | orchestrator | 2026-03-05 00:50:29.258966 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2026-03-05 00:50:29.258971 | orchestrator | Thursday 05 March 2026 00:48:59 +0000 (0:00:00.433) 0:03:24.187 ******** 2026-03-05 00:50:29.258976 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-05 00:50:29.258981 | orchestrator | 2026-03-05 00:50:29.258986 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2026-03-05 00:50:29.258990 | orchestrator | Thursday 05 March 2026 00:49:05 +0000 (0:00:06.467) 0:03:30.655 ******** 2026-03-05 00:50:29.258995 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2026-03-05 00:50:29.259000 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 
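The "Wait for Cilium resources" task above retries until each deployment and daemonset reports ready (the 43.5s it takes dominates the TASKS RECAP later in the log). A minimal sketch of that retry-until-ready pattern in Python, using a hypothetical `is_ready` callback in place of the real `kubectl rollout status` call:

```python
import time

def wait_for_resources(resources, is_ready, retries=30, delay=1.0):
    """Poll each resource until is_ready(name) returns True,
    retrying up to `retries` times per resource — mirrors the
    Ansible until/retries pattern visible in the log."""
    for name in resources:
        for _ in range(retries):
            if is_ready(name):
                break
            time.sleep(delay)
        else:
            raise TimeoutError(f"{name} not ready after {retries} retries")

# Toy readiness check: hubble-relay becomes ready on its second
# poll (like the single RETRYING line above), the rest immediately.
calls = {}
def fake_ready(name):
    calls[name] = calls.get(name, 0) + 1
    return name != "deployment/hubble-relay" or calls[name] >= 2

wait_for_resources(
    ["deployment/cilium-operator", "daemonset/cilium",
     "deployment/hubble-relay", "deployment/hubble-ui"],
    fake_ready, retries=30, delay=0.0)
```

The `fake_ready` backend is purely illustrative; in the playbook the check is an actual `kubectl` invocation per item.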
2026-03-05 00:50:29.259005 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2026-03-05 00:50:29.259011 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2026-03-05 00:50:29.259016 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2026-03-05 00:50:29.259020 | orchestrator | 2026-03-05 00:50:29.259025 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2026-03-05 00:50:29.259030 | orchestrator | Thursday 05 March 2026 00:49:49 +0000 (0:00:43.508) 0:04:14.164 ******** 2026-03-05 00:50:29.259039 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-05 00:50:29.259044 | orchestrator | 2026-03-05 00:50:29.259052 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2026-03-05 00:50:29.259057 | orchestrator | Thursday 05 March 2026 00:49:50 +0000 (0:00:01.434) 0:04:15.598 ******** 2026-03-05 00:50:29.259062 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-05 00:50:29.259067 | orchestrator | 2026-03-05 00:50:29.259071 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2026-03-05 00:50:29.259076 | orchestrator | Thursday 05 March 2026 00:49:52 +0000 (0:00:02.249) 0:04:17.848 ******** 2026-03-05 00:50:29.259081 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-05 00:50:29.259086 | orchestrator | 2026-03-05 00:50:29.259094 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2026-03-05 00:50:29.259106 | orchestrator | Thursday 05 March 2026 00:49:53 +0000 (0:00:01.231) 0:04:19.079 ******** 2026-03-05 00:50:29.259114 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:50:29.259120 | orchestrator | 2026-03-05 00:50:29.259128 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2026-03-05 00:50:29.259135 | orchestrator 
| Thursday 05 March 2026 00:49:54 +0000 (0:00:00.121) 0:04:19.201 ******** 2026-03-05 00:50:29.259142 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2026-03-05 00:50:29.259150 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2026-03-05 00:50:29.259158 | orchestrator | 2026-03-05 00:50:29.259166 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2026-03-05 00:50:29.259174 | orchestrator | Thursday 05 March 2026 00:49:56 +0000 (0:00:02.329) 0:04:21.530 ******** 2026-03-05 00:50:29.259183 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:50:29.259191 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:50:29.259199 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:50:29.259204 | orchestrator | 2026-03-05 00:50:29.259208 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2026-03-05 00:50:29.259213 | orchestrator | Thursday 05 March 2026 00:49:56 +0000 (0:00:00.474) 0:04:22.005 ******** 2026-03-05 00:50:29.259218 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:50:29.259223 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:50:29.259228 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:50:29.259232 | orchestrator | 2026-03-05 00:50:29.259237 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2026-03-05 00:50:29.259242 | orchestrator | 2026-03-05 00:50:29.259247 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2026-03-05 00:50:29.259251 | orchestrator | Thursday 05 March 2026 00:49:58 +0000 (0:00:01.172) 0:04:23.178 ******** 2026-03-05 00:50:29.259256 | orchestrator | ok: [testbed-manager] 2026-03-05 00:50:29.259261 | orchestrator | 2026-03-05 00:50:29.259266 | orchestrator | TASK [k9s : Include distribution specific install tasks] 
*********************** 2026-03-05 00:50:29.259271 | orchestrator | Thursday 05 March 2026 00:49:58 +0000 (0:00:00.149) 0:04:23.327 ******** 2026-03-05 00:50:29.259275 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2026-03-05 00:50:29.259280 | orchestrator | 2026-03-05 00:50:29.259285 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2026-03-05 00:50:29.259290 | orchestrator | Thursday 05 March 2026 00:49:58 +0000 (0:00:00.220) 0:04:23.548 ******** 2026-03-05 00:50:29.259295 | orchestrator | changed: [testbed-manager] 2026-03-05 00:50:29.259299 | orchestrator | 2026-03-05 00:50:29.259304 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2026-03-05 00:50:29.259309 | orchestrator | 2026-03-05 00:50:29.259314 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2026-03-05 00:50:29.259319 | orchestrator | Thursday 05 March 2026 00:50:05 +0000 (0:00:06.668) 0:04:30.216 ******** 2026-03-05 00:50:29.259324 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:50:29.259329 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:50:29.259333 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:50:29.259338 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:50:29.259343 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:50:29.259348 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:50:29.259352 | orchestrator | 2026-03-05 00:50:29.259357 | orchestrator | TASK [Manage labels] *********************************************************** 2026-03-05 00:50:29.259362 | orchestrator | Thursday 05 March 2026 00:50:06 +0000 (0:00:01.042) 0:04:31.259 ******** 2026-03-05 00:50:29.259367 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-03-05 00:50:29.259372 | orchestrator | ok: [testbed-node-5 -> localhost] => 
(item=node-role.osism.tech/compute-plane=true) 2026-03-05 00:50:29.259377 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-03-05 00:50:29.259385 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-03-05 00:50:29.259390 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-03-05 00:50:29.259395 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-03-05 00:50:29.259400 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-03-05 00:50:29.259405 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-03-05 00:50:29.259409 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-03-05 00:50:29.259414 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-03-05 00:50:29.259419 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-03-05 00:50:29.259424 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-03-05 00:50:29.259433 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-03-05 00:50:29.259441 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-03-05 00:50:29.259446 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-03-05 00:50:29.259451 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-03-05 00:50:29.259456 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-03-05 00:50:29.259461 | orchestrator | ok: [testbed-node-0 -> localhost] 
=> (item=node-role.osism.tech/rook-mds=true) 2026-03-05 00:50:29.259466 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-03-05 00:50:29.259471 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-03-05 00:50:29.259475 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-03-05 00:50:29.259480 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-03-05 00:50:29.259485 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-05 00:50:29.259490 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-03-05 00:50:29.259494 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-05 00:50:29.259499 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-05 00:50:29.259504 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-03-05 00:50:29.259509 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-05 00:50:29.259513 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-05 00:50:29.259518 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-05 00:50:29.259523 | orchestrator | 2026-03-05 00:50:29.259528 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-03-05 00:50:29.259533 | orchestrator | Thursday 05 March 2026 00:50:26 +0000 (0:00:20.209) 0:04:51.468 ******** 2026-03-05 00:50:29.259538 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:50:29.259543 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:50:29.259548 | orchestrator | 
skipping: [testbed-node-5] 2026-03-05 00:50:29.259552 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:50:29.259557 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:50:29.259562 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:50:29.259567 | orchestrator | 2026-03-05 00:50:29.259572 | orchestrator | TASK [Manage taints] *********************************************************** 2026-03-05 00:50:29.259576 | orchestrator | Thursday 05 March 2026 00:50:27 +0000 (0:00:00.806) 0:04:52.275 ******** 2026-03-05 00:50:29.259585 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:50:29.259590 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:50:29.259595 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:50:29.259599 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:50:29.259604 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:50:29.259609 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:50:29.259614 | orchestrator | 2026-03-05 00:50:29.259619 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-05 00:50:29.259624 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 00:50:29.259630 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-03-05 00:50:29.259635 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-05 00:50:29.259640 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-05 00:50:29.259646 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-05 00:50:29.259651 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-05 00:50:29.259655 | orchestrator | testbed-node-5 : ok=16  
changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-05 00:50:29.259660 | orchestrator | 2026-03-05 00:50:29.259665 | orchestrator | 2026-03-05 00:50:29.259670 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-05 00:50:29.259675 | orchestrator | Thursday 05 March 2026 00:50:27 +0000 (0:00:00.429) 0:04:52.704 ******** 2026-03-05 00:50:29.259680 | orchestrator | =============================================================================== 2026-03-05 00:50:29.259685 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 43.76s 2026-03-05 00:50:29.259728 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 43.51s 2026-03-05 00:50:29.259736 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 24.01s 2026-03-05 00:50:29.259748 | orchestrator | Manage labels ---------------------------------------------------------- 20.21s 2026-03-05 00:50:29.259756 | orchestrator | kubectl : Install required packages ------------------------------------ 14.57s 2026-03-05 00:50:29.259761 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 10.71s 2026-03-05 00:50:29.259766 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 9.49s 2026-03-05 00:50:29.259771 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 6.67s 2026-03-05 00:50:29.259776 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 6.47s 2026-03-05 00:50:29.259781 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.60s 2026-03-05 00:50:29.259786 | orchestrator | k3s_custom_registries : Create directory /etc/rancher/k3s --------------- 3.67s 2026-03-05 00:50:29.259791 | orchestrator | k3s_server : Remove manifests and folders that are 
only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.11s 2026-03-05 00:50:29.259796 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 2.97s 2026-03-05 00:50:29.259801 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 2.79s 2026-03-05 00:50:29.259805 | orchestrator | k3s_server : Copy K3s service file -------------------------------------- 2.78s 2026-03-05 00:50:29.259810 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.76s 2026-03-05 00:50:29.259819 | orchestrator | k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 2.74s 2026-03-05 00:50:29.259824 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 2.71s 2026-03-05 00:50:29.259829 | orchestrator | k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured --- 2.52s 2026-03-05 00:50:29.259834 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.33s 2026-03-05 00:50:29.259839 | orchestrator | 2026-03-05 00:50:29 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED 2026-03-05 00:50:29.259844 | orchestrator | 2026-03-05 00:50:29 | INFO  | Task 54ba0c27-c6ab-48b5-a972-50508998d0d1 is in state STARTED 2026-03-05 00:50:29.259849 | orchestrator | 2026-03-05 00:50:29 | INFO  | Task 3aad7a31-dd4a-4ee0-ad53-08447840e183 is in state STARTED 2026-03-05 00:50:29.259854 | orchestrator | 2026-03-05 00:50:29 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:50:32.419001 | orchestrator | 2026-03-05 00:50:32 | INFO  | Task f165b63a-ca83-496f-ab98-9850d77450b6 is in state STARTED 2026-03-05 00:50:32.419926 | orchestrator | 2026-03-05 00:50:32 | INFO  | Task cb9d7483-9f79-4de9-b310-65ba5c75c84c is in state STARTED 2026-03-05 00:50:32.421457 | orchestrator | 2026-03-05 00:50:32 | INFO  | Task 
b7047ca1-215b-4831-bacf-0f6c1d225628 is in state STARTED 2026-03-05 00:50:32.422558 | orchestrator | 2026-03-05 00:50:32 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED 2026-03-05 00:50:32.423851 | orchestrator | 2026-03-05 00:50:32 | INFO  | Task 54ba0c27-c6ab-48b5-a972-50508998d0d1 is in state STARTED 2026-03-05 00:50:32.425474 | orchestrator | 2026-03-05 00:50:32 | INFO  | Task 3aad7a31-dd4a-4ee0-ad53-08447840e183 is in state STARTED 2026-03-05 00:50:32.425527 | orchestrator | 2026-03-05 00:50:32 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:50:35.472312 | orchestrator | 2026-03-05 00:50:35 | INFO  | Task f165b63a-ca83-496f-ab98-9850d77450b6 is in state STARTED 2026-03-05 00:50:35.473305 | orchestrator | 2026-03-05 00:50:35 | INFO  | Task cb9d7483-9f79-4de9-b310-65ba5c75c84c is in state STARTED 2026-03-05 00:50:35.474301 | orchestrator | 2026-03-05 00:50:35 | INFO  | Task b7047ca1-215b-4831-bacf-0f6c1d225628 is in state STARTED 2026-03-05 00:50:35.475831 | orchestrator | 2026-03-05 00:50:35 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED 2026-03-05 00:50:35.477046 | orchestrator | 2026-03-05 00:50:35 | INFO  | Task 54ba0c27-c6ab-48b5-a972-50508998d0d1 is in state STARTED 2026-03-05 00:50:35.478113 | orchestrator | 2026-03-05 00:50:35 | INFO  | Task 3aad7a31-dd4a-4ee0-ad53-08447840e183 is in state STARTED 2026-03-05 00:50:35.478147 | orchestrator | 2026-03-05 00:50:35 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:50:38.551369 | orchestrator | 2026-03-05 00:50:38 | INFO  | Task f165b63a-ca83-496f-ab98-9850d77450b6 is in state STARTED 2026-03-05 00:50:38.552961 | orchestrator | 2026-03-05 00:50:38 | INFO  | Task cb9d7483-9f79-4de9-b310-65ba5c75c84c is in state STARTED 2026-03-05 00:50:38.553890 | orchestrator | 2026-03-05 00:50:38 | INFO  | Task b7047ca1-215b-4831-bacf-0f6c1d225628 is in state STARTED 2026-03-05 00:50:38.554752 | orchestrator | 2026-03-05 00:50:38 | INFO  | Task 
7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED 2026-03-05 00:50:38.555418 | orchestrator | 2026-03-05 00:50:38 | INFO  | Task 54ba0c27-c6ab-48b5-a972-50508998d0d1 is in state STARTED 2026-03-05 00:50:38.556300 | orchestrator | 2026-03-05 00:50:38 | INFO  | Task 3aad7a31-dd4a-4ee0-ad53-08447840e183 is in state STARTED 2026-03-05 00:50:38.556698 | orchestrator | 2026-03-05 00:50:38 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:50:41.595845 | orchestrator | 2026-03-05 00:50:41 | INFO  | Task f165b63a-ca83-496f-ab98-9850d77450b6 is in state STARTED 2026-03-05 00:50:41.596509 | orchestrator | 2026-03-05 00:50:41 | INFO  | Task cb9d7483-9f79-4de9-b310-65ba5c75c84c is in state SUCCESS 2026-03-05 00:50:41.598343 | orchestrator | 2026-03-05 00:50:41 | INFO  | Task b7047ca1-215b-4831-bacf-0f6c1d225628 is in state STARTED 2026-03-05 00:50:41.599296 | orchestrator | 2026-03-05 00:50:41 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED 2026-03-05 00:50:41.600626 | orchestrator | 2026-03-05 00:50:41 | INFO  | Task 54ba0c27-c6ab-48b5-a972-50508998d0d1 is in state STARTED 2026-03-05 00:50:41.602050 | orchestrator | 2026-03-05 00:50:41 | INFO  | Task 3aad7a31-dd4a-4ee0-ad53-08447840e183 is in state STARTED 2026-03-05 00:50:41.602092 | orchestrator | 2026-03-05 00:50:41 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:50:44.644363 | orchestrator | 2026-03-05 00:50:44 | INFO  | Task f165b63a-ca83-496f-ab98-9850d77450b6 is in state STARTED 2026-03-05 00:50:44.645061 | orchestrator | 2026-03-05 00:50:44 | INFO  | Task b7047ca1-215b-4831-bacf-0f6c1d225628 is in state STARTED 2026-03-05 00:50:44.648867 | orchestrator | 2026-03-05 00:50:44 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED 2026-03-05 00:50:44.650890 | orchestrator | 2026-03-05 00:50:44 | INFO  | Task 54ba0c27-c6ab-48b5-a972-50508998d0d1 is in state STARTED 2026-03-05 00:50:44.652894 | orchestrator | 2026-03-05 00:50:44 | INFO  | Task 
3aad7a31-dd4a-4ee0-ad53-08447840e183 is in state SUCCESS 2026-03-05 00:50:44.652929 | orchestrator | 2026-03-05 00:50:44 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:50:47.817418 | orchestrator | 2026-03-05 00:50:47 | INFO  | Task f165b63a-ca83-496f-ab98-9850d77450b6 is in state STARTED 2026-03-05 00:50:47.818977 | orchestrator | 2026-03-05 00:50:47 | INFO  | Task b7047ca1-215b-4831-bacf-0f6c1d225628 is in state STARTED 2026-03-05 00:50:47.821496 | orchestrator | 2026-03-05 00:50:47 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED 2026-03-05 00:50:47.823991 | orchestrator | 2026-03-05 00:50:47 | INFO  | Task 54ba0c27-c6ab-48b5-a972-50508998d0d1 is in state STARTED 2026-03-05 00:50:47.824202 | orchestrator | 2026-03-05 00:50:47 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:50:50.857545 | orchestrator | 2026-03-05 00:50:50 | INFO  | Task f165b63a-ca83-496f-ab98-9850d77450b6 is in state STARTED 2026-03-05 00:50:50.857611 | orchestrator | 2026-03-05 00:50:50 | INFO  | Task b7047ca1-215b-4831-bacf-0f6c1d225628 is in state STARTED 2026-03-05 00:50:50.858437 | orchestrator | 2026-03-05 00:50:50 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED 2026-03-05 00:50:50.859121 | orchestrator | 2026-03-05 00:50:50 | INFO  | Task 54ba0c27-c6ab-48b5-a972-50508998d0d1 is in state STARTED 2026-03-05 00:50:50.859182 | orchestrator | 2026-03-05 00:50:50 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:50:53.900814 | orchestrator | 2026-03-05 00:50:53 | INFO  | Task f165b63a-ca83-496f-ab98-9850d77450b6 is in state STARTED 2026-03-05 00:50:53.901172 | orchestrator | 2026-03-05 00:50:53 | INFO  | Task b7047ca1-215b-4831-bacf-0f6c1d225628 is in state STARTED 2026-03-05 00:50:53.903509 | orchestrator | 2026-03-05 00:50:53 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED 2026-03-05 00:50:53.904113 | orchestrator | 2026-03-05 00:50:53 | INFO  | Task 
54ba0c27-c6ab-48b5-a972-50508998d0d1 is in state STARTED 2026-03-05 00:50:53.904129 | orchestrator | 2026-03-05 00:50:53 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:50:56.946158 | orchestrator | 2026-03-05 00:50:56 | INFO  | Task f165b63a-ca83-496f-ab98-9850d77450b6 is in state STARTED 2026-03-05 00:50:56.946279 | orchestrator | 2026-03-05 00:50:56 | INFO  | Task b7047ca1-215b-4831-bacf-0f6c1d225628 is in state STARTED 2026-03-05 00:50:56.946290 | orchestrator | 2026-03-05 00:50:56 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED 2026-03-05 00:50:56.947595 | orchestrator | 2026-03-05 00:50:56 | INFO  | Task 54ba0c27-c6ab-48b5-a972-50508998d0d1 is in state STARTED 2026-03-05 00:50:56.947711 | orchestrator | 2026-03-05 00:50:56 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:50:59.981894 | orchestrator | 2026-03-05 00:50:59 | INFO  | Task f165b63a-ca83-496f-ab98-9850d77450b6 is in state STARTED 2026-03-05 00:50:59.982588 | orchestrator | 2026-03-05 00:50:59 | INFO  | Task b7047ca1-215b-4831-bacf-0f6c1d225628 is in state STARTED 2026-03-05 00:50:59.983106 | orchestrator | 2026-03-05 00:50:59 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED 2026-03-05 00:50:59.984268 | orchestrator | 2026-03-05 00:50:59 | INFO  | Task 54ba0c27-c6ab-48b5-a972-50508998d0d1 is in state STARTED 2026-03-05 00:50:59.984301 | orchestrator | 2026-03-05 00:50:59 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:51:03.028482 | orchestrator | 2026-03-05 00:51:03 | INFO  | Task f165b63a-ca83-496f-ab98-9850d77450b6 is in state STARTED 2026-03-05 00:51:03.031093 | orchestrator | 2026-03-05 00:51:03 | INFO  | Task b7047ca1-215b-4831-bacf-0f6c1d225628 is in state STARTED 2026-03-05 00:51:03.032237 | orchestrator | 2026-03-05 00:51:03 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED 2026-03-05 00:51:03.033337 | orchestrator | 2026-03-05 00:51:03 | INFO  | Task 
54ba0c27-c6ab-48b5-a972-50508998d0d1 is in state STARTED 2026-03-05 00:51:03.033368 | orchestrator | 2026-03-05 00:51:03 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:51:06.080065 | orchestrator | 2026-03-05 00:51:06 | INFO  | Task f165b63a-ca83-496f-ab98-9850d77450b6 is in state STARTED 2026-03-05 00:51:06.081333 | orchestrator | 2026-03-05 00:51:06 | INFO  | Task b7047ca1-215b-4831-bacf-0f6c1d225628 is in state STARTED 2026-03-05 00:51:06.082217 | orchestrator | 2026-03-05 00:51:06 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED 2026-03-05 00:51:06.084249 | orchestrator | 2026-03-05 00:51:06 | INFO  | Task 54ba0c27-c6ab-48b5-a972-50508998d0d1 is in state STARTED 2026-03-05 00:51:06.084293 | orchestrator | 2026-03-05 00:51:06 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:51:09.115878 | orchestrator | 2026-03-05 00:51:09 | INFO  | Task f165b63a-ca83-496f-ab98-9850d77450b6 is in state STARTED 2026-03-05 00:51:09.117009 | orchestrator | 2026-03-05 00:51:09 | INFO  | Task b7047ca1-215b-4831-bacf-0f6c1d225628 is in state STARTED 2026-03-05 00:51:09.117746 | orchestrator | 2026-03-05 00:51:09 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED 2026-03-05 00:51:09.118751 | orchestrator | 2026-03-05 00:51:09 | INFO  | Task 54ba0c27-c6ab-48b5-a972-50508998d0d1 is in state STARTED 2026-03-05 00:51:09.118778 | orchestrator | 2026-03-05 00:51:09 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:51:12.147201 | orchestrator | 2026-03-05 00:51:12 | INFO  | Task f165b63a-ca83-496f-ab98-9850d77450b6 is in state STARTED 2026-03-05 00:51:12.147725 | orchestrator | 2026-03-05 00:51:12 | INFO  | Task b7047ca1-215b-4831-bacf-0f6c1d225628 is in state STARTED 2026-03-05 00:51:12.148402 | orchestrator | 2026-03-05 00:51:12 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED 2026-03-05 00:51:12.149053 | orchestrator | 2026-03-05 00:51:12 | INFO  | Task 
54ba0c27-c6ab-48b5-a972-50508998d0d1 is in state STARTED 2026-03-05 00:51:12.149090 | orchestrator | 2026-03-05 00:51:12 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:51:15.186704 | orchestrator | 2026-03-05 00:51:15.186792 | orchestrator | 2026-03-05 00:51:15.186802 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2026-03-05 00:51:15.186810 | orchestrator | 2026-03-05 00:51:15.186817 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-03-05 00:51:15.186825 | orchestrator | Thursday 05 March 2026 00:50:34 +0000 (0:00:00.420) 0:00:00.420 ******** 2026-03-05 00:51:15.186833 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-03-05 00:51:15.186840 | orchestrator | 2026-03-05 00:51:15.186848 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-03-05 00:51:15.186856 | orchestrator | Thursday 05 March 2026 00:50:36 +0000 (0:00:01.098) 0:00:01.519 ******** 2026-03-05 00:51:15.186864 | orchestrator | changed: [testbed-manager] 2026-03-05 00:51:15.186872 | orchestrator | 2026-03-05 00:51:15.186880 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2026-03-05 00:51:15.186887 | orchestrator | Thursday 05 March 2026 00:50:37 +0000 (0:00:01.762) 0:00:03.281 ******** 2026-03-05 00:51:15.186895 | orchestrator | changed: [testbed-manager] 2026-03-05 00:51:15.186903 | orchestrator | 2026-03-05 00:51:15.186910 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-05 00:51:15.186918 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 00:51:15.186928 | orchestrator | 2026-03-05 00:51:15.186936 | orchestrator | 2026-03-05 00:51:15.186944 | orchestrator | TASKS RECAP ******************************************************************** 
2026-03-05 00:51:15.186951 | orchestrator | Thursday 05 March 2026 00:50:38 +0000 (0:00:00.903) 0:00:04.185 ******** 2026-03-05 00:51:15.186959 | orchestrator | =============================================================================== 2026-03-05 00:51:15.186966 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.76s 2026-03-05 00:51:15.186974 | orchestrator | Get kubeconfig file ----------------------------------------------------- 1.10s 2026-03-05 00:51:15.186998 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.90s 2026-03-05 00:51:15.187007 | orchestrator | 2026-03-05 00:51:15.187014 | orchestrator | 2026-03-05 00:51:15.187022 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-03-05 00:51:15.187030 | orchestrator | 2026-03-05 00:51:15.187037 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-03-05 00:51:15.187045 | orchestrator | Thursday 05 March 2026 00:50:33 +0000 (0:00:00.271) 0:00:00.271 ******** 2026-03-05 00:51:15.187052 | orchestrator | ok: [testbed-manager] 2026-03-05 00:51:15.187061 | orchestrator | 2026-03-05 00:51:15.187069 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-03-05 00:51:15.187076 | orchestrator | Thursday 05 March 2026 00:50:34 +0000 (0:00:00.861) 0:00:01.132 ******** 2026-03-05 00:51:15.187083 | orchestrator | ok: [testbed-manager] 2026-03-05 00:51:15.187091 | orchestrator | 2026-03-05 00:51:15.187147 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-03-05 00:51:15.187155 | orchestrator | Thursday 05 March 2026 00:50:35 +0000 (0:00:00.960) 0:00:02.093 ******** 2026-03-05 00:51:15.187163 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-03-05 00:51:15.187170 | orchestrator | 2026-03-05 00:51:15.187179 | 
orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-03-05 00:51:15.187187 | orchestrator | Thursday 05 March 2026 00:50:36 +0000 (0:00:00.872) 0:00:02.965 ******** 2026-03-05 00:51:15.187194 | orchestrator | changed: [testbed-manager] 2026-03-05 00:51:15.187246 | orchestrator | 2026-03-05 00:51:15.187254 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-03-05 00:51:15.187262 | orchestrator | Thursday 05 March 2026 00:50:38 +0000 (0:00:02.312) 0:00:05.278 ******** 2026-03-05 00:51:15.187290 | orchestrator | changed: [testbed-manager] 2026-03-05 00:51:15.187298 | orchestrator | 2026-03-05 00:51:15.187306 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-03-05 00:51:15.187314 | orchestrator | Thursday 05 March 2026 00:50:39 +0000 (0:00:00.540) 0:00:05.818 ******** 2026-03-05 00:51:15.187322 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-05 00:51:15.187330 | orchestrator | 2026-03-05 00:51:15.187337 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-03-05 00:51:15.187345 | orchestrator | Thursday 05 March 2026 00:50:41 +0000 (0:00:01.885) 0:00:07.704 ******** 2026-03-05 00:51:15.187353 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-05 00:51:15.187361 | orchestrator | 2026-03-05 00:51:15.187369 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2026-03-05 00:51:15.187376 | orchestrator | Thursday 05 March 2026 00:50:42 +0000 (0:00:01.045) 0:00:08.750 ******** 2026-03-05 00:51:15.187384 | orchestrator | ok: [testbed-manager] 2026-03-05 00:51:15.187392 | orchestrator | 2026-03-05 00:51:15.187400 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-03-05 00:51:15.187407 | orchestrator | Thursday 05 March 2026 00:50:42 +0000 (0:00:00.449) 
0:00:09.200 ******** 2026-03-05 00:51:15.187415 | orchestrator | ok: [testbed-manager] 2026-03-05 00:51:15.187423 | orchestrator | 2026-03-05 00:51:15.187430 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-05 00:51:15.187438 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 00:51:15.187446 | orchestrator | 2026-03-05 00:51:15.187454 | orchestrator | 2026-03-05 00:51:15.187462 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-05 00:51:15.187469 | orchestrator | Thursday 05 March 2026 00:50:43 +0000 (0:00:00.414) 0:00:09.614 ******** 2026-03-05 00:51:15.187477 | orchestrator | =============================================================================== 2026-03-05 00:51:15.187485 | orchestrator | Write kubeconfig file --------------------------------------------------- 2.31s 2026-03-05 00:51:15.187493 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.89s 2026-03-05 00:51:15.187500 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 1.05s 2026-03-05 00:51:15.187523 | orchestrator | Create .kube directory -------------------------------------------------- 0.96s 2026-03-05 00:51:15.187531 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.87s 2026-03-05 00:51:15.187539 | orchestrator | Get home directory of operator user ------------------------------------- 0.86s 2026-03-05 00:51:15.187546 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.54s 2026-03-05 00:51:15.187554 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.45s 2026-03-05 00:51:15.187562 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.41s 2026-03-05 00:51:15.187569 | orchestrator 
| 2026-03-05 00:51:15.187577 | orchestrator | 2026-03-05 00:51:15 | INFO  | Task f165b63a-ca83-496f-ab98-9850d77450b6 is in state STARTED 2026-03-05 00:51:15.187585 | orchestrator | 2026-03-05 00:51:15 | INFO  | Task b7047ca1-215b-4831-bacf-0f6c1d225628 is in state SUCCESS 2026-03-05 00:51:15.187870 | orchestrator | 2026-03-05 00:51:15.187886 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2026-03-05 00:51:15.187893 | orchestrator | 2026-03-05 00:51:15.187900 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-03-05 00:51:15.187906 | orchestrator | Thursday 05 March 2026 00:48:38 +0000 (0:00:00.075) 0:00:00.075 ******** 2026-03-05 00:51:15.187913 | orchestrator | ok: [localhost] => { 2026-03-05 00:51:15.187921 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2026-03-05 00:51:15.187929 | orchestrator | } 2026-03-05 00:51:15.187936 | orchestrator | 2026-03-05 00:51:15.187944 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2026-03-05 00:51:15.187959 | orchestrator | Thursday 05 March 2026 00:48:38 +0000 (0:00:00.048) 0:00:00.123 ******** 2026-03-05 00:51:15.187974 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2026-03-05 00:51:15.187983 | orchestrator | ...ignoring 2026-03-05 00:51:15.187990 | orchestrator | 2026-03-05 00:51:15.187996 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2026-03-05 00:51:15.188003 | orchestrator | Thursday 05 March 2026 00:48:41 +0000 (0:00:03.149) 0:00:03.273 ******** 2026-03-05 00:51:15.188011 | orchestrator | skipping: [localhost] 2026-03-05 00:51:15.188017 | orchestrator | 2026-03-05 00:51:15.188023 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2026-03-05 00:51:15.188029 | orchestrator | Thursday 05 March 2026 00:48:41 +0000 (0:00:00.200) 0:00:03.473 ******** 2026-03-05 00:51:15.188036 | orchestrator | ok: [localhost] 2026-03-05 00:51:15.188042 | orchestrator | 2026-03-05 00:51:15.188048 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-05 00:51:15.188054 | orchestrator | 2026-03-05 00:51:15.188060 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-05 00:51:15.188067 | orchestrator | Thursday 05 March 2026 00:48:42 +0000 (0:00:00.641) 0:00:04.115 ******** 2026-03-05 00:51:15.188074 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:51:15.188081 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:51:15.188087 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:51:15.188094 | orchestrator | 2026-03-05 00:51:15.188100 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-05 00:51:15.188106 | orchestrator | Thursday 05 March 2026 00:48:43 +0000 (0:00:00.977) 0:00:05.092 ******** 2026-03-05 00:51:15.188112 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-03-05 00:51:15.188119 | orchestrator | ok: [testbed-node-1] => 
(item=enable_rabbitmq_True) 2026-03-05 00:51:15.188125 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2026-03-05 00:51:15.188131 | orchestrator | 2026-03-05 00:51:15.188137 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-03-05 00:51:15.188144 | orchestrator | 2026-03-05 00:51:15.188150 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-05 00:51:15.188156 | orchestrator | Thursday 05 March 2026 00:48:45 +0000 (0:00:02.182) 0:00:07.275 ******** 2026-03-05 00:51:15.188163 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:51:15.188170 | orchestrator | 2026-03-05 00:51:15.188178 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-03-05 00:51:15.188185 | orchestrator | Thursday 05 March 2026 00:48:46 +0000 (0:00:00.833) 0:00:08.108 ******** 2026-03-05 00:51:15.188191 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:51:15.188198 | orchestrator | 2026-03-05 00:51:15.188205 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-03-05 00:51:15.188211 | orchestrator | Thursday 05 March 2026 00:48:47 +0000 (0:00:00.896) 0:00:09.005 ******** 2026-03-05 00:51:15.188218 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:51:15.188224 | orchestrator | 2026-03-05 00:51:15.188231 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-03-05 00:51:15.188238 | orchestrator | Thursday 05 March 2026 00:48:47 +0000 (0:00:00.388) 0:00:09.393 ******** 2026-03-05 00:51:15.188245 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:51:15.188251 | orchestrator | 2026-03-05 00:51:15.188257 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-03-05 00:51:15.188264 | 
orchestrator | Thursday 05 March 2026 00:48:47 +0000 (0:00:00.411) 0:00:09.804 ******** 2026-03-05 00:51:15.188272 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:51:15.188278 | orchestrator | 2026-03-05 00:51:15.188286 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-03-05 00:51:15.188304 | orchestrator | Thursday 05 March 2026 00:48:48 +0000 (0:00:00.611) 0:00:10.416 ******** 2026-03-05 00:51:15.188311 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:51:15.188317 | orchestrator | 2026-03-05 00:51:15.188324 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-05 00:51:15.188332 | orchestrator | Thursday 05 March 2026 00:48:49 +0000 (0:00:01.084) 0:00:11.500 ******** 2026-03-05 00:51:15.188340 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:51:15.188346 | orchestrator | 2026-03-05 00:51:15.188353 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-03-05 00:51:15.188359 | orchestrator | Thursday 05 March 2026 00:48:51 +0000 (0:00:01.992) 0:00:13.492 ******** 2026-03-05 00:51:15.188365 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:51:15.188372 | orchestrator | 2026-03-05 00:51:15.188378 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-03-05 00:51:15.188384 | orchestrator | Thursday 05 March 2026 00:48:53 +0000 (0:00:01.857) 0:00:15.350 ******** 2026-03-05 00:51:15.188391 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:51:15.188397 | orchestrator | 2026-03-05 00:51:15.188404 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-03-05 00:51:15.188412 | orchestrator | Thursday 05 March 2026 00:48:55 +0000 (0:00:01.731) 0:00:17.082 ******** 2026-03-05 00:51:15.188419 | orchestrator | 
skipping: [testbed-node-0] 2026-03-05 00:51:15.188425 | orchestrator | 2026-03-05 00:51:15.188487 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-03-05 00:51:15.188495 | orchestrator | Thursday 05 March 2026 00:48:55 +0000 (0:00:00.694) 0:00:17.776 ******** 2026-03-05 00:51:15.188513 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-05 00:51:15.188525 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-05 00:51:15.188534 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-05 00:51:15.188550 | orchestrator | 2026-03-05 00:51:15.188558 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-03-05 00:51:15.188565 | orchestrator | Thursday 05 March 2026 00:48:56 +0000 (0:00:01.104) 0:00:18.880 ******** 2026-03-05 00:51:15.188579 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 
'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-05 00:51:15.188588 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 
'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-05 00:51:15.188596 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-05 00:51:15.188609 | orchestrator | 2026-03-05 00:51:15.188615 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-03-05 00:51:15.188641 | orchestrator | Thursday 05 March 2026 00:49:01 +0000 (0:00:04.531) 0:00:23.414 ******** 2026-03-05 00:51:15.188648 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-05 00:51:15.188655 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-05 00:51:15.188662 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-05 00:51:15.188668 | orchestrator | 2026-03-05 00:51:15.188674 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 
2026-03-05 00:51:15.188680 | orchestrator | Thursday 05 March 2026 00:49:05 +0000 (0:00:03.784) 0:00:27.198 ******** 2026-03-05 00:51:15.188687 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-05 00:51:15.188694 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-05 00:51:15.188700 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-05 00:51:15.188706 | orchestrator | 2026-03-05 00:51:15.188712 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-03-05 00:51:15.188718 | orchestrator | Thursday 05 March 2026 00:49:08 +0000 (0:00:03.606) 0:00:30.805 ******** 2026-03-05 00:51:15.188724 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-05 00:51:15.188730 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-05 00:51:15.188736 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-05 00:51:15.188743 | orchestrator | 2026-03-05 00:51:15.188816 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-03-05 00:51:15.188837 | orchestrator | Thursday 05 March 2026 00:49:10 +0000 (0:00:01.774) 0:00:32.580 ******** 2026-03-05 00:51:15.188849 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-05 00:51:15.188855 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-05 00:51:15.188861 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-05 00:51:15.188866 | orchestrator | 2026-03-05 00:51:15.188873 | orchestrator | TASK [rabbitmq : Copying over 
definitions.json] ******************************** 2026-03-05 00:51:15.188878 | orchestrator | Thursday 05 March 2026 00:49:12 +0000 (0:00:02.302) 0:00:34.883 ******** 2026-03-05 00:51:15.188884 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-05 00:51:15.188890 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-05 00:51:15.188900 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-05 00:51:15.188906 | orchestrator | 2026-03-05 00:51:15.188913 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-03-05 00:51:15.188919 | orchestrator | Thursday 05 March 2026 00:49:15 +0000 (0:00:02.355) 0:00:37.238 ******** 2026-03-05 00:51:15.188925 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-05 00:51:15.188932 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-05 00:51:15.188938 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-05 00:51:15.188951 | orchestrator | 2026-03-05 00:51:15.188957 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-05 00:51:15.188964 | orchestrator | Thursday 05 March 2026 00:49:18 +0000 (0:00:02.914) 0:00:40.152 ******** 2026-03-05 00:51:15.188969 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:51:15.188975 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:51:15.188981 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:51:15.188987 | orchestrator | 2026-03-05 00:51:15.188993 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2026-03-05 00:51:15.188999 | orchestrator | Thursday 05 March 2026 00:49:18 
+0000 (0:00:00.529) 0:00:40.682 ******** 2026-03-05 00:51:15.189006 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-05 00:51:15.189014 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-05 00:51:15.189031 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-05 00:51:15.189038 | orchestrator | 2026-03-05 00:51:15.189044 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2026-03-05 00:51:15.189057 | orchestrator | Thursday 05 March 2026 00:49:20 +0000 (0:00:01.417) 0:00:42.099 ******** 2026-03-05 00:51:15.189063 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:51:15.189069 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:51:15.189075 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:51:15.189081 | orchestrator | 2026-03-05 00:51:15.189088 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2026-03-05 00:51:15.189094 | 
orchestrator | Thursday 05 March 2026 00:49:21 +0000 (0:00:00.990) 0:00:43.089 ******** 2026-03-05 00:51:15.189100 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:51:15.189106 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:51:15.189112 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:51:15.189118 | orchestrator | 2026-03-05 00:51:15.189124 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-03-05 00:51:15.189129 | orchestrator | Thursday 05 March 2026 00:49:28 +0000 (0:00:07.639) 0:00:50.729 ******** 2026-03-05 00:51:15.189136 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:51:15.189142 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:51:15.189148 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:51:15.189155 | orchestrator | 2026-03-05 00:51:15.189161 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-05 00:51:15.189167 | orchestrator | 2026-03-05 00:51:15.189174 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-05 00:51:15.189180 | orchestrator | Thursday 05 March 2026 00:49:29 +0000 (0:00:00.793) 0:00:51.523 ******** 2026-03-05 00:51:15.189186 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:51:15.189192 | orchestrator | 2026-03-05 00:51:15.189199 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-05 00:51:15.189205 | orchestrator | Thursday 05 March 2026 00:49:30 +0000 (0:00:00.719) 0:00:52.243 ******** 2026-03-05 00:51:15.189212 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:51:15.189218 | orchestrator | 2026-03-05 00:51:15.189225 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-05 00:51:15.189232 | orchestrator | Thursday 05 March 2026 00:49:30 +0000 (0:00:00.503) 0:00:52.747 ******** 2026-03-05 00:51:15.189238 | orchestrator 
| changed: [testbed-node-0] 2026-03-05 00:51:15.189245 | orchestrator | 2026-03-05 00:51:15.189252 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-05 00:51:15.189258 | orchestrator | Thursday 05 March 2026 00:49:32 +0000 (0:00:02.058) 0:00:54.805 ******** 2026-03-05 00:51:15.189264 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:51:15.189270 | orchestrator | 2026-03-05 00:51:15.189276 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-05 00:51:15.189282 | orchestrator | 2026-03-05 00:51:15.189289 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-05 00:51:15.189296 | orchestrator | Thursday 05 March 2026 00:50:32 +0000 (0:00:59.581) 0:01:54.387 ******** 2026-03-05 00:51:15.189303 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:51:15.189309 | orchestrator | 2026-03-05 00:51:15.189316 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-05 00:51:15.189323 | orchestrator | Thursday 05 March 2026 00:50:33 +0000 (0:00:00.748) 0:01:55.136 ******** 2026-03-05 00:51:15.189329 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:51:15.189336 | orchestrator | 2026-03-05 00:51:15.189342 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-05 00:51:15.189349 | orchestrator | Thursday 05 March 2026 00:50:33 +0000 (0:00:00.446) 0:01:55.582 ******** 2026-03-05 00:51:15.189355 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:51:15.189361 | orchestrator | 2026-03-05 00:51:15.189368 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-05 00:51:15.189374 | orchestrator | Thursday 05 March 2026 00:50:40 +0000 (0:00:07.319) 0:02:02.901 ******** 2026-03-05 00:51:15.189380 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:51:15.189386 
| orchestrator | 2026-03-05 00:51:15.189392 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-05 00:51:15.189405 | orchestrator | 2026-03-05 00:51:15.189411 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-05 00:51:15.189418 | orchestrator | Thursday 05 March 2026 00:50:52 +0000 (0:00:11.816) 0:02:14.718 ******** 2026-03-05 00:51:15.189425 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:51:15.189431 | orchestrator | 2026-03-05 00:51:15.189438 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-05 00:51:15.189443 | orchestrator | Thursday 05 March 2026 00:50:53 +0000 (0:00:00.637) 0:02:15.356 ******** 2026-03-05 00:51:15.189449 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:51:15.189455 | orchestrator | 2026-03-05 00:51:15.189461 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-05 00:51:15.189467 | orchestrator | Thursday 05 March 2026 00:50:53 +0000 (0:00:00.268) 0:02:15.624 ******** 2026-03-05 00:51:15.189473 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:51:15.189479 | orchestrator | 2026-03-05 00:51:15.189485 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-05 00:51:15.189497 | orchestrator | Thursday 05 March 2026 00:50:55 +0000 (0:00:01.709) 0:02:17.333 ******** 2026-03-05 00:51:15.189502 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:51:15.189509 | orchestrator | 2026-03-05 00:51:15.189516 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-03-05 00:51:15.189522 | orchestrator | 2026-03-05 00:51:15.189528 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2026-03-05 00:51:15.189535 | orchestrator | Thursday 05 March 2026 00:51:09 +0000 (0:00:14.631) 
0:02:31.965 ******** 2026-03-05 00:51:15.189542 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:51:15.189548 | orchestrator | 2026-03-05 00:51:15.189555 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-03-05 00:51:15.189562 | orchestrator | Thursday 05 March 2026 00:51:10 +0000 (0:00:00.499) 0:02:32.464 ******** 2026-03-05 00:51:15.189568 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:51:15.189575 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:51:15.189581 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:51:15.189593 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-03-05 00:51:15.189600 | orchestrator | enable_outward_rabbitmq_True 2026-03-05 00:51:15.189606 | orchestrator | 2026-03-05 00:51:15.189613 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2026-03-05 00:51:15.189639 | orchestrator | skipping: no hosts matched 2026-03-05 00:51:15.189646 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-03-05 00:51:15.189654 | orchestrator | outward_rabbitmq_restart 2026-03-05 00:51:15.189661 | orchestrator | 2026-03-05 00:51:15.189668 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2026-03-05 00:51:15.189676 | orchestrator | skipping: no hosts matched 2026-03-05 00:51:15.189683 | orchestrator | 2026-03-05 00:51:15.189691 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2026-03-05 00:51:15.189698 | orchestrator | skipping: no hosts matched 2026-03-05 00:51:15.189705 | orchestrator | 2026-03-05 00:51:15.189713 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-05 00:51:15.189721 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-03-05 
00:51:15.189730 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-03-05 00:51:15.189737 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-05 00:51:15.189745 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-05 00:51:15.189759 | orchestrator | 2026-03-05 00:51:15.189767 | orchestrator | 2026-03-05 00:51:15.189774 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-05 00:51:15.189781 | orchestrator | Thursday 05 March 2026 00:51:13 +0000 (0:00:02.532) 0:02:34.997 ******** 2026-03-05 00:51:15.189788 | orchestrator | =============================================================================== 2026-03-05 00:51:15.189794 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 86.03s 2026-03-05 00:51:15.189800 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 11.09s 2026-03-05 00:51:15.189807 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.64s 2026-03-05 00:51:15.189814 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 4.53s 2026-03-05 00:51:15.189821 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 3.78s 2026-03-05 00:51:15.189827 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 3.61s 2026-03-05 00:51:15.189833 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.15s 2026-03-05 00:51:15.189840 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.91s 2026-03-05 00:51:15.189847 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.53s 2026-03-05 00:51:15.189852 | 
orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 2.36s 2026-03-05 00:51:15.189858 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.30s 2026-03-05 00:51:15.189865 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.18s 2026-03-05 00:51:15.189871 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.11s 2026-03-05 00:51:15.189878 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.99s 2026-03-05 00:51:15.189885 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.86s 2026-03-05 00:51:15.189892 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.77s 2026-03-05 00:51:15.189898 | orchestrator | rabbitmq : List RabbitMQ policies --------------------------------------- 1.73s 2026-03-05 00:51:15.189905 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.42s 2026-03-05 00:51:15.189911 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 1.22s 2026-03-05 00:51:15.189918 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.10s 2026-03-05 00:51:15.189925 | orchestrator | 2026-03-05 00:51:15 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED 2026-03-05 00:51:15.192004 | orchestrator | 2026-03-05 00:51:15 | INFO  | Task 54ba0c27-c6ab-48b5-a972-50508998d0d1 is in state STARTED 2026-03-05 00:51:15.192054 | orchestrator | 2026-03-05 00:51:15 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:51:18.235433 | orchestrator | 2026-03-05 00:51:18 | INFO  | Task f165b63a-ca83-496f-ab98-9850d77450b6 is in state STARTED 2026-03-05 00:51:18.236724 | orchestrator | 2026-03-05 00:51:18 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED 
2026-03-05 00:51:18.237924 | orchestrator | 2026-03-05 00:51:18 | INFO  | Task 54ba0c27-c6ab-48b5-a972-50508998d0d1 is in state STARTED 2026-03-05 00:51:18.237967 | orchestrator | 2026-03-05 00:51:18 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:51:21.277317 | orchestrator | 2026-03-05 00:51:21 | INFO  | Task f165b63a-ca83-496f-ab98-9850d77450b6 is in state STARTED 2026-03-05 00:51:21.278301 | orchestrator | 2026-03-05 00:51:21 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED 2026-03-05 00:51:21.279057 | orchestrator | 2026-03-05 00:51:21 | INFO  | Task 54ba0c27-c6ab-48b5-a972-50508998d0d1 is in state STARTED 2026-03-05 00:51:21.279073 | orchestrator | 2026-03-05 00:51:21 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:51:24.311019 | orchestrator | 2026-03-05 00:51:24 | INFO  | Task f165b63a-ca83-496f-ab98-9850d77450b6 is in state STARTED 2026-03-05 00:51:24.311114 | orchestrator | 2026-03-05 00:51:24 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED 2026-03-05 00:51:24.312238 | orchestrator | 2026-03-05 00:51:24 | INFO  | Task 54ba0c27-c6ab-48b5-a972-50508998d0d1 is in state STARTED 2026-03-05 00:51:24.312286 | orchestrator | 2026-03-05 00:51:24 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:51:27.395309 | orchestrator | 2026-03-05 00:51:27 | INFO  | Task f165b63a-ca83-496f-ab98-9850d77450b6 is in state STARTED 2026-03-05 00:51:27.398081 | orchestrator | 2026-03-05 00:51:27 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED 2026-03-05 00:51:27.398868 | orchestrator | 2026-03-05 00:51:27 | INFO  | Task 54ba0c27-c6ab-48b5-a972-50508998d0d1 is in state STARTED 2026-03-05 00:51:27.398922 | orchestrator | 2026-03-05 00:51:27 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:51:30.441769 | orchestrator | 2026-03-05 00:51:30 | INFO  | Task f165b63a-ca83-496f-ab98-9850d77450b6 is in state STARTED 2026-03-05 00:51:30.444241 | orchestrator | 2026-03-05 
00:51:30 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED 2026-03-05 00:51:30.445471 | orchestrator | 2026-03-05 00:51:30 | INFO  | Task 54ba0c27-c6ab-48b5-a972-50508998d0d1 is in state STARTED 2026-03-05 00:51:30.445519 | orchestrator | 2026-03-05 00:51:30 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:51:33.505232 | orchestrator | 2026-03-05 00:51:33 | INFO  | Task f165b63a-ca83-496f-ab98-9850d77450b6 is in state STARTED 2026-03-05 00:51:33.510679 | orchestrator | 2026-03-05 00:51:33 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED 2026-03-05 00:51:33.516170 | orchestrator | 2026-03-05 00:51:33 | INFO  | Task 54ba0c27-c6ab-48b5-a972-50508998d0d1 is in state STARTED 2026-03-05 00:51:33.516239 | orchestrator | 2026-03-05 00:51:33 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:51:36.550158 | orchestrator | 2026-03-05 00:51:36 | INFO  | Task f165b63a-ca83-496f-ab98-9850d77450b6 is in state STARTED 2026-03-05 00:51:36.551728 | orchestrator | 2026-03-05 00:51:36 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED 2026-03-05 00:51:36.553037 | orchestrator | 2026-03-05 00:51:36 | INFO  | Task 54ba0c27-c6ab-48b5-a972-50508998d0d1 is in state STARTED 2026-03-05 00:51:36.553098 | orchestrator | 2026-03-05 00:51:36 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:51:39.596458 | orchestrator | 2026-03-05 00:51:39 | INFO  | Task f165b63a-ca83-496f-ab98-9850d77450b6 is in state STARTED 2026-03-05 00:51:39.598940 | orchestrator | 2026-03-05 00:51:39 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED 2026-03-05 00:51:39.599890 | orchestrator | 2026-03-05 00:51:39 | INFO  | Task 54ba0c27-c6ab-48b5-a972-50508998d0d1 is in state STARTED 2026-03-05 00:51:39.599934 | orchestrator | 2026-03-05 00:51:39 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:51:42.640753 | orchestrator | 2026-03-05 00:51:42 | INFO  | Task 
f165b63a-ca83-496f-ab98-9850d77450b6 is in state STARTED 2026-03-05 00:51:42.642993 | orchestrator | 2026-03-05 00:51:42 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED 2026-03-05 00:51:42.645303 | orchestrator | 2026-03-05 00:51:42 | INFO  | Task 54ba0c27-c6ab-48b5-a972-50508998d0d1 is in state STARTED 2026-03-05 00:51:42.645684 | orchestrator | 2026-03-05 00:51:42 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:51:45.690843 | orchestrator | 2026-03-05 00:51:45 | INFO  | Task f165b63a-ca83-496f-ab98-9850d77450b6 is in state STARTED 2026-03-05 00:51:45.693968 | orchestrator | 2026-03-05 00:51:45 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED 2026-03-05 00:51:45.695437 | orchestrator | 2026-03-05 00:51:45 | INFO  | Task 54ba0c27-c6ab-48b5-a972-50508998d0d1 is in state STARTED 2026-03-05 00:51:45.695529 | orchestrator | 2026-03-05 00:51:45 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:51:48.745433 | orchestrator | 2026-03-05 00:51:48 | INFO  | Task f165b63a-ca83-496f-ab98-9850d77450b6 is in state STARTED 2026-03-05 00:51:48.747978 | orchestrator | 2026-03-05 00:51:48 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED 2026-03-05 00:51:48.749252 | orchestrator | 2026-03-05 00:51:48 | INFO  | Task 54ba0c27-c6ab-48b5-a972-50508998d0d1 is in state STARTED 2026-03-05 00:51:48.749299 | orchestrator | 2026-03-05 00:51:48 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:51:51.788344 | orchestrator | 2026-03-05 00:51:51 | INFO  | Task f165b63a-ca83-496f-ab98-9850d77450b6 is in state STARTED 2026-03-05 00:51:51.790463 | orchestrator | 2026-03-05 00:51:51 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED 2026-03-05 00:51:51.792677 | orchestrator | 2026-03-05 00:51:51 | INFO  | Task 54ba0c27-c6ab-48b5-a972-50508998d0d1 is in state STARTED 2026-03-05 00:51:51.794717 | orchestrator | 2026-03-05 00:51:51 | INFO  | Wait 1 second(s) until the next 
check 2026-03-05 00:51:54.831537 | orchestrator | 2026-03-05 00:51:54 | INFO  | Task f165b63a-ca83-496f-ab98-9850d77450b6 is in state STARTED 2026-03-05 00:51:54.831838 | orchestrator | 2026-03-05 00:51:54 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED 2026-03-05 00:51:54.835989 | orchestrator | 2026-03-05 00:51:54 | INFO  | Task 54ba0c27-c6ab-48b5-a972-50508998d0d1 is in state STARTED 2026-03-05 00:51:54.836111 | orchestrator | 2026-03-05 00:51:54 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:51:57.882080 | orchestrator | 2026-03-05 00:51:57 | INFO  | Task f165b63a-ca83-496f-ab98-9850d77450b6 is in state STARTED 2026-03-05 00:51:57.884703 | orchestrator | 2026-03-05 00:51:57 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED 2026-03-05 00:51:57.886795 | orchestrator | 2026-03-05 00:51:57 | INFO  | Task 54ba0c27-c6ab-48b5-a972-50508998d0d1 is in state STARTED 2026-03-05 00:51:57.886850 | orchestrator | 2026-03-05 00:51:57 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:52:00.939197 | orchestrator | 2026-03-05 00:52:00 | INFO  | Task f165b63a-ca83-496f-ab98-9850d77450b6 is in state STARTED 2026-03-05 00:52:00.941130 | orchestrator | 2026-03-05 00:52:00 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED 2026-03-05 00:52:00.941191 | orchestrator | 2026-03-05 00:52:00 | INFO  | Task 54ba0c27-c6ab-48b5-a972-50508998d0d1 is in state STARTED 2026-03-05 00:52:00.941200 | orchestrator | 2026-03-05 00:52:00 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:52:03.983046 | orchestrator | 2026-03-05 00:52:03 | INFO  | Task f165b63a-ca83-496f-ab98-9850d77450b6 is in state STARTED 2026-03-05 00:52:03.983214 | orchestrator | 2026-03-05 00:52:03 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED 2026-03-05 00:52:03.984139 | orchestrator | 2026-03-05 00:52:03 | INFO  | Task 54ba0c27-c6ab-48b5-a972-50508998d0d1 is in state STARTED 2026-03-05 
00:52:03.984170 | orchestrator | 2026-03-05 00:52:03 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:52:07.030942 | orchestrator | 2026-03-05 00:52:07 | INFO  | Task f165b63a-ca83-496f-ab98-9850d77450b6 is in state STARTED 2026-03-05 00:52:07.032053 | orchestrator | 2026-03-05 00:52:07 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED 2026-03-05 00:52:07.034402 | orchestrator | 2026-03-05 00:52:07 | INFO  | Task 54ba0c27-c6ab-48b5-a972-50508998d0d1 is in state STARTED 2026-03-05 00:52:07.034753 | orchestrator | 2026-03-05 00:52:07 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:52:10.074234 | orchestrator | 2026-03-05 00:52:10 | INFO  | Task f165b63a-ca83-496f-ab98-9850d77450b6 is in state STARTED 2026-03-05 00:52:10.075679 | orchestrator | 2026-03-05 00:52:10 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED 2026-03-05 00:52:10.077365 | orchestrator | 2026-03-05 00:52:10 | INFO  | Task 54ba0c27-c6ab-48b5-a972-50508998d0d1 is in state STARTED 2026-03-05 00:52:10.077563 | orchestrator | 2026-03-05 00:52:10 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:52:13.120580 | orchestrator | 2026-03-05 00:52:13 | INFO  | Task f165b63a-ca83-496f-ab98-9850d77450b6 is in state SUCCESS 2026-03-05 00:52:13.122575 | orchestrator | 2026-03-05 00:52:13.122809 | orchestrator | 2026-03-05 00:52:13.123150 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-05 00:52:13.123180 | orchestrator | 2026-03-05 00:52:13.123196 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-05 00:52:13.123243 | orchestrator | Thursday 05 March 2026 00:49:32 +0000 (0:00:00.421) 0:00:00.421 ******** 2026-03-05 00:52:13.123255 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:52:13.123267 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:52:13.123277 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:52:13.123287 | 
orchestrator | ok: [testbed-node-0] 2026-03-05 00:52:13.123296 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:52:13.123305 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:52:13.123313 | orchestrator | 2026-03-05 00:52:13.123338 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-05 00:52:13.123347 | orchestrator | Thursday 05 March 2026 00:49:33 +0000 (0:00:01.250) 0:00:01.672 ******** 2026-03-05 00:52:13.123356 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-03-05 00:52:13.123365 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-03-05 00:52:13.123374 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-03-05 00:52:13.123383 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-03-05 00:52:13.123391 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-03-05 00:52:13.123400 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-03-05 00:52:13.123408 | orchestrator | 2026-03-05 00:52:13.123417 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-03-05 00:52:13.123425 | orchestrator | 2026-03-05 00:52:13.123434 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-03-05 00:52:13.123443 | orchestrator | Thursday 05 March 2026 00:49:35 +0000 (0:00:01.581) 0:00:03.254 ******** 2026-03-05 00:52:13.123453 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:52:13.123463 | orchestrator | 2026-03-05 00:52:13.123472 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-03-05 00:52:13.123485 | orchestrator | Thursday 05 March 2026 00:49:36 +0000 (0:00:01.580) 0:00:04.834 ******** 2026-03-05 00:52:13.123499 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:52:13.123510 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:52:13.123543 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:52:13.123552 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:52:13.123561 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:52:13.123570 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:52:13.123580 | orchestrator | 2026-03-05 00:52:13.123608 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-03-05 00:52:13.123617 | orchestrator | Thursday 05 March 2026 00:49:37 +0000 (0:00:01.326) 0:00:06.160 ******** 2026-03-05 00:52:13.123632 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:52:13.123642 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:52:13.123743 | orchestrator | changed: [testbed-node-5] => 
(item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:52:13.123756 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:52:13.123781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:52:13.123805 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:52:13.123821 | orchestrator | 2026-03-05 00:52:13.123835 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-03-05 00:52:13.123849 | orchestrator | Thursday 05 March 
2026 00:49:39 +0000 (0:00:01.756) 0:00:07.917 ********
2026-03-05 00:52:13.123863 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:52:13.123877 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:52:13.123904 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:52:13.123926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:52:13.123940 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:52:13.123954 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:52:13.123978 | orchestrator |
2026-03-05 00:52:13.123994 | orchestrator | TASK [ovn-controller : Copying over systemd override] **************************
2026-03-05 00:52:13.124009 | orchestrator | Thursday 05 March 2026 00:49:41 +0000 (0:00:01.744) 0:00:09.662 ********
2026-03-05 00:52:13.124023 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:52:13.124038 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:52:13.124053 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:52:13.124068 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:52:13.124083 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:52:13.124099 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:52:13.124111 | orchestrator |
2026-03-05 00:52:13.124126 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************
2026-03-05 00:52:13.124135 | orchestrator | Thursday 05 March 2026 00:49:42 +0000 (0:00:01.557) 0:00:11.219 ********
2026-03-05 00:52:13.124150 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:52:13.124161 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:52:13.124241 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:52:13.124258 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:52:13.124275 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:52:13.124289 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:52:13.124304 | orchestrator |
2026-03-05 00:52:13.124317 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ********************
2026-03-05 00:52:13.124330 | orchestrator | Thursday 05 March 2026 00:49:44 +0000 (0:00:03.835) 0:00:12.874 ********
2026-03-05 00:52:13.124343 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:52:13.124356 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:52:13.124363 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:52:13.124371 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:52:13.124379 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:52:13.124386 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:52:13.124394 | orchestrator |
2026-03-05 00:52:13.124402 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] *********************************
2026-03-05 00:52:13.124410 | orchestrator | Thursday 05 March 2026 00:49:48 +0000 (0:00:03.835) 0:00:16.710 ********
2026-03-05 00:52:13.124417 | orchestrator | changed:
[testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'})
2026-03-05 00:52:13.124426 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'})
2026-03-05 00:52:13.124434 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'})
2026-03-05 00:52:13.124441 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'})
2026-03-05 00:52:13.124449 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'})
2026-03-05 00:52:13.124457 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'})
2026-03-05 00:52:13.124465 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-05 00:52:13.124477 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-05 00:52:13.124498 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-05 00:52:13.124511 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-05 00:52:13.124523 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-05 00:52:13.124543 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-05 00:52:13.124552 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-03-05 00:52:13.124610 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-03-05 00:52:13.124619 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-03-05 00:52:13.124627 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-03-05 00:52:13.124635 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-03-05 00:52:13.124642 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-03-05 00:52:13.124676 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-05 00:52:13.124687 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-05 00:52:13.124733 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-05 00:52:13.124742 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-05 00:52:13.124750 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-05 00:52:13.124758 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-05 00:52:13.124765 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-03-05 00:52:13.124773 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-03-05 00:52:13.124781 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-03-05 00:52:13.124788 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-03-05 00:52:13.124796 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-03-05 00:52:13.124804 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-03-05 00:52:13.124812 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-03-05 00:52:13.124819 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-03-05 00:52:13.124827 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-03-05 00:52:13.124835 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-03-05 00:52:13.124843 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-03-05 00:52:13.124850 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-03-05 00:52:13.124859 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-03-05 00:52:13.124866 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-03-05 00:52:13.124874 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-03-05 00:52:13.124882 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-03-05 00:52:13.124896 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-03-05 00:52:13.124904 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'})
2026-03-05 00:52:13.124913 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-03-05 00:52:13.124920 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'})
2026-03-05 00:52:13.124935 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'})
2026-03-05 00:52:13.124943 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'})
2026-03-05 00:52:13.124951 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'})
2026-03-05 00:52:13.124958 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-03-05 00:52:13.124971 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'})
2026-03-05 00:52:13.124979 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-03-05 00:52:13.124986 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-03-05 00:52:13.124994 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-03-05 00:52:13.125002 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-03-05 00:52:13.125010 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-03-05 00:52:13.125017 | orchestrator |
2026-03-05
00:52:13.125025 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-03-05 00:52:13.125033 | orchestrator | Thursday 05 March 2026 00:50:12 +0000 (0:00:24.005) 0:00:40.716 ********
2026-03-05 00:52:13.125041 | orchestrator |
2026-03-05 00:52:13.125049 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-03-05 00:52:13.125056 | orchestrator | Thursday 05 March 2026 00:50:12 +0000 (0:00:00.191) 0:00:40.907 ********
2026-03-05 00:52:13.125064 | orchestrator |
2026-03-05 00:52:13.125072 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-03-05 00:52:13.125079 | orchestrator | Thursday 05 March 2026 00:50:12 +0000 (0:00:00.128) 0:00:41.036 ********
2026-03-05 00:52:13.125087 | orchestrator |
2026-03-05 00:52:13.125095 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-03-05 00:52:13.125102 | orchestrator | Thursday 05 March 2026 00:50:12 +0000 (0:00:00.109) 0:00:41.146 ********
2026-03-05 00:52:13.125110 | orchestrator |
2026-03-05 00:52:13.125118 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-03-05 00:52:13.125125 | orchestrator | Thursday 05 March 2026 00:50:13 +0000 (0:00:00.218) 0:00:41.364 ********
2026-03-05 00:52:13.125133 | orchestrator |
2026-03-05 00:52:13.125141 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-03-05 00:52:13.125148 | orchestrator | Thursday 05 March 2026 00:50:13 +0000 (0:00:00.175) 0:00:41.540 ********
2026-03-05 00:52:13.125156 | orchestrator |
2026-03-05 00:52:13.125164 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] ***********************
2026-03-05 00:52:13.125171 | orchestrator | Thursday 05 March 2026 00:50:13 +0000 (0:00:00.086) 0:00:41.627 ********
2026-03-05 00:52:13.125179 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:52:13.125204 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:52:13.125212 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:52:13.125220 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:52:13.125227 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:52:13.125235 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:52:13.125243 | orchestrator |
2026-03-05 00:52:13.125250 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************
2026-03-05 00:52:13.125258 | orchestrator | Thursday 05 March 2026 00:50:16 +0000 (0:00:02.802) 0:00:44.430 ********
2026-03-05 00:52:13.125266 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:52:13.125273 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:52:13.125281 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:52:13.125289 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:52:13.125296 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:52:13.125304 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:52:13.125312 | orchestrator |
2026-03-05 00:52:13.125320 | orchestrator | PLAY [Apply role ovn-db] *******************************************************
2026-03-05 00:52:13.125327 | orchestrator |
2026-03-05 00:52:13.125335 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-03-05 00:52:13.125343 | orchestrator | Thursday 05 March 2026 00:50:48 +0000 (0:00:32.688) 0:01:17.118 ********
2026-03-05 00:52:13.125350 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 00:52:13.125358 | orchestrator |
2026-03-05 00:52:13.125366 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-03-05 00:52:13.125373 | orchestrator | Thursday 05 March 2026 00:50:49 +0000 (0:00:00.858) 0:01:17.977 ********
2026-03-05 00:52:13.125381 | orchestrator | included:
/ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 00:52:13.125389 | orchestrator |
2026-03-05 00:52:13.125397 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] *************
2026-03-05 00:52:13.125404 | orchestrator | Thursday 05 March 2026 00:50:50 +0000 (0:00:00.602) 0:01:18.579 ********
2026-03-05 00:52:13.125412 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:52:13.125420 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:52:13.125427 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:52:13.125435 | orchestrator |
2026-03-05 00:52:13.125443 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] ***************
2026-03-05 00:52:13.125450 | orchestrator | Thursday 05 March 2026 00:50:51 +0000 (0:00:01.136) 0:01:19.716 ********
2026-03-05 00:52:13.125458 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:52:13.125466 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:52:13.125473 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:52:13.125486 | orchestrator |
2026-03-05 00:52:13.125494 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] ***************
2026-03-05 00:52:13.125502 | orchestrator | Thursday 05 March 2026 00:50:51 +0000 (0:00:00.338) 0:01:20.054 ********
2026-03-05 00:52:13.125509 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:52:13.125517 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:52:13.125525 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:52:13.125532 | orchestrator |
2026-03-05 00:52:13.125540 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] *******
2026-03-05 00:52:13.125548 | orchestrator | Thursday 05 March 2026 00:50:52 +0000 (0:00:00.357) 0:01:20.412 ********
2026-03-05 00:52:13.125556 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:52:13.125564 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:52:13.125575 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:52:13.125583 | orchestrator |
2026-03-05 00:52:13.125591 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2026-03-05 00:52:13.125598 | orchestrator | Thursday 05 March 2026 00:50:52 +0000 (0:00:00.391) 0:01:20.803 ********
2026-03-05 00:52:13.125606 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:52:13.125614 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:52:13.125621 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:52:13.125629 | orchestrator |
2026-03-05 00:52:13.125647 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2026-03-05 00:52:13.125675 | orchestrator | Thursday 05 March 2026 00:50:53 +0000 (0:00:00.608) 0:01:21.412 ********
2026-03-05 00:52:13.125683 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:52:13.125691 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:52:13.125698 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:52:13.125706 | orchestrator |
2026-03-05 00:52:13.125714 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2026-03-05 00:52:13.125721 | orchestrator | Thursday 05 March 2026 00:50:53 +0000 (0:00:00.374) 0:01:21.787 ********
2026-03-05 00:52:13.125729 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:52:13.125737 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:52:13.125744 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:52:13.125752 | orchestrator |
2026-03-05 00:52:13.125760 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
2026-03-05 00:52:13.125768 | orchestrator | Thursday 05 March 2026 00:50:53 +0000 (0:00:00.369) 0:01:22.157 ********
2026-03-05 00:52:13.125775 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:52:13.125783 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:52:13.125791 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:52:13.125798 | orchestrator |
2026-03-05 00:52:13.125806 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2026-03-05 00:52:13.125814 | orchestrator | Thursday 05 March 2026 00:50:54 +0000 (0:00:00.328) 0:01:22.485 ********
2026-03-05 00:52:13.125821 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:52:13.125829 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:52:13.125837 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:52:13.125844 | orchestrator |
2026-03-05 00:52:13.125852 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2026-03-05 00:52:13.125860 | orchestrator | Thursday 05 March 2026 00:50:54 +0000 (0:00:00.593) 0:01:23.079 ********
2026-03-05 00:52:13.125867 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:52:13.125875 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:52:13.125882 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:52:13.125890 | orchestrator |
2026-03-05 00:52:13.125898 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2026-03-05 00:52:13.125906 | orchestrator | Thursday 05 March 2026 00:50:55 +0000 (0:00:00.391) 0:01:23.470 ********
2026-03-05 00:52:13.125913 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:52:13.125921 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:52:13.125929 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:52:13.125937 | orchestrator |
2026-03-05 00:52:13.125944 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2026-03-05 00:52:13.125952 | orchestrator | Thursday 05 March 2026 00:50:55 +0000 (0:00:00.440) 0:01:23.910 ********
2026-03-05 00:52:13.125960 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:52:13.125967 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:52:13.125975 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:52:13.125982 | orchestrator |
2026-03-05 00:52:13.125990 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2026-03-05 00:52:13.125998 | orchestrator | Thursday 05 March 2026 00:50:56 +0000 (0:00:00.412) 0:01:24.323 ********
2026-03-05 00:52:13.126006 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:52:13.126014 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:52:13.126326 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:52:13.126335 | orchestrator |
2026-03-05 00:52:13.126343 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2026-03-05 00:52:13.126351 | orchestrator | Thursday 05 March 2026 00:50:56 +0000 (0:00:00.583) 0:01:24.906 ********
2026-03-05 00:52:13.126359 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:52:13.126367 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:52:13.126375 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:52:13.126382 | orchestrator |
2026-03-05 00:52:13.126390 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2026-03-05 00:52:13.126406 | orchestrator | Thursday 05 March 2026 00:50:57 +0000 (0:00:00.489) 0:01:25.395 ********
2026-03-05 00:52:13.126414 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:52:13.126421 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:52:13.126429 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:52:13.126437 | orchestrator |
2026-03-05 00:52:13.126445 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2026-03-05 00:52:13.126453 | orchestrator | Thursday 05 March 2026 00:50:57 +0000 (0:00:00.405) 0:01:25.801 ********
2026-03-05 00:52:13.126460 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:52:13.126468 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:52:13.126476 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:52:13.126483 | orchestrator |
2026-03-05 00:52:13.126491 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2026-03-05 00:52:13.126499 | orchestrator | Thursday 05 March 2026 00:50:57 +0000 (0:00:00.383) 0:01:26.185 ********
2026-03-05 00:52:13.126507 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:52:13.126515 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:52:13.126528 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:52:13.126536 | orchestrator |
2026-03-05 00:52:13.126544 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-03-05 00:52:13.126552 | orchestrator | Thursday 05 March 2026 00:50:58 +0000 (0:00:00.371) 0:01:26.556 ********
2026-03-05 00:52:13.126560 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 00:52:13.126568 | orchestrator |
2026-03-05 00:52:13.126576 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] *******************
2026-03-05 00:52:13.126584 | orchestrator | Thursday 05 March 2026 00:50:59 +0000 (0:00:00.974) 0:01:27.531 ********
2026-03-05 00:52:13.126592 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:52:13.126605 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:52:13.126613 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:52:13.126621 | orchestrator |
2026-03-05 00:52:13.126628 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] *******************
2026-03-05 00:52:13.126636 | orchestrator | Thursday 05 March 2026 00:50:59 +0000 (0:00:00.603) 0:01:28.134 ********
2026-03-05 00:52:13.126644 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:52:13.126673 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:52:13.126686 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:52:13.126699 | orchestrator |
2026-03-05 00:52:13.126712 |
orchestrator | TASK [ovn-db : Check NB cluster status] ****************************************
2026-03-05 00:52:13.126725 | orchestrator | Thursday 05 March 2026 00:51:00 +0000 (0:00:00.520) 0:01:28.655 ********
2026-03-05 00:52:13.126739 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:52:13.126751 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:52:13.126765 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:52:13.126773 | orchestrator |
2026-03-05 00:52:13.126781 | orchestrator | TASK [ovn-db : Check SB cluster status] ****************************************
2026-03-05 00:52:13.126789 | orchestrator | Thursday 05 March 2026 00:51:01 +0000 (0:00:00.671) 0:01:29.326 ********
2026-03-05 00:52:13.126797 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:52:13.126805 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:52:13.126813 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:52:13.126820 | orchestrator |
2026-03-05 00:52:13.126828 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] ***
2026-03-05 00:52:13.126837 | orchestrator | Thursday 05 March 2026 00:51:01 +0000 (0:00:00.404) 0:01:29.731 ********
2026-03-05 00:52:13.126844 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:52:13.126852 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:52:13.126860 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:52:13.126868 | orchestrator |
2026-03-05 00:52:13.126876 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] ***
2026-03-05 00:52:13.126883 | orchestrator | Thursday 05 March 2026 00:51:01 +0000 (0:00:00.417) 0:01:30.148 ********
2026-03-05 00:52:13.126898 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:52:13.126905 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:52:13.126913 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:52:13.126921 | orchestrator |
2026-03-05 00:52:13.126929 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ********************
2026-03-05 00:52:13.126939 | orchestrator | Thursday 05 March 2026 00:51:02 +0000 (0:00:00.530) 0:01:30.679 ********
2026-03-05 00:52:13.126948 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:52:13.126957 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:52:13.126966 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:52:13.126975 | orchestrator |
2026-03-05 00:52:13.126984 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ********************
2026-03-05 00:52:13.126993 | orchestrator | Thursday 05 March 2026 00:51:03 +0000 (0:00:00.689) 0:01:31.368 ********
2026-03-05 00:52:13.127002 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:52:13.127011 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:52:13.127019 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:52:13.127028 | orchestrator |
2026-03-05 00:52:13.127037 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-03-05 00:52:13.127046 | orchestrator | Thursday 05 March 2026 00:51:03 +0000 (0:00:00.641) 0:01:32.010 ********
2026-03-05 00:52:13.127057 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:52:13.127115 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:52:13.127125 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:52:13.127142 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:52:13.127154 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:52:13.127169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:52:13.127179 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:52:13.127195 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:52:13.127205 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:52:13.127215 | orchestrator |
2026-03-05 00:52:13.127225 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-03-05 00:52:13.127234 | orchestrator | Thursday 05 March 2026 00:51:05 +0000 (0:00:01.821) 0:01:33.832 ********
2026-03-05 00:52:13.127244 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:52:13.127254 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True,
'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:52:13.127264 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:52:13.127274 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:52:13.127287 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:52:13.127298 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:52:13.127306 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:52:13.127319 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:52:13.127328 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:52:13.127336 | orchestrator | 2026-03-05 00:52:13.127344 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-03-05 00:52:13.127352 | orchestrator | Thursday 05 March 2026 00:51:10 +0000 (0:00:04.916) 0:01:38.749 ******** 2026-03-05 00:52:13.127360 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:52:13.127368 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:52:13.127418 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:52:13.127440 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:52:13.127453 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:52:13.127516 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:52:13.127530 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:52:13.127559 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:52:13.127573 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:52:13.127587 | orchestrator | 2026-03-05 00:52:13.127601 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-05 00:52:13.127614 | orchestrator | Thursday 05 March 2026 00:51:12 +0000 (0:00:02.325) 0:01:41.074 ******** 2026-03-05 00:52:13.127628 | orchestrator | 2026-03-05 00:52:13.127638 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-05 00:52:13.127646 | orchestrator | Thursday 05 March 2026 00:51:12 +0000 (0:00:00.072) 0:01:41.147 ******** 2026-03-05 
00:52:13.127689 | orchestrator | 2026-03-05 00:52:13.127697 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-05 00:52:13.127705 | orchestrator | Thursday 05 March 2026 00:51:12 +0000 (0:00:00.068) 0:01:41.216 ******** 2026-03-05 00:52:13.127712 | orchestrator | 2026-03-05 00:52:13.127720 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-03-05 00:52:13.127728 | orchestrator | Thursday 05 March 2026 00:51:13 +0000 (0:00:00.064) 0:01:41.280 ******** 2026-03-05 00:52:13.127736 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:52:13.127744 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:52:13.127752 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:52:13.127760 | orchestrator | 2026-03-05 00:52:13.127768 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-03-05 00:52:13.127775 | orchestrator | Thursday 05 March 2026 00:51:15 +0000 (0:00:02.813) 0:01:44.094 ******** 2026-03-05 00:52:13.127783 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:52:13.127791 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:52:13.127799 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:52:13.127806 | orchestrator | 2026-03-05 00:52:13.127814 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-03-05 00:52:13.127822 | orchestrator | Thursday 05 March 2026 00:51:22 +0000 (0:00:07.005) 0:01:51.100 ******** 2026-03-05 00:52:13.127830 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:52:13.127837 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:52:13.127845 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:52:13.127853 | orchestrator | 2026-03-05 00:52:13.127861 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-03-05 00:52:13.127869 | orchestrator | Thursday 05 March 2026 
00:51:29 +0000 (0:00:06.922) 0:01:58.022 ******** 2026-03-05 00:52:13.127876 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:52:13.127884 | orchestrator | 2026-03-05 00:52:13.127892 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-03-05 00:52:13.127900 | orchestrator | Thursday 05 March 2026 00:51:29 +0000 (0:00:00.155) 0:01:58.178 ******** 2026-03-05 00:52:13.127908 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:52:13.127916 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:52:13.127923 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:52:13.127931 | orchestrator | 2026-03-05 00:52:13.127939 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-03-05 00:52:13.127947 | orchestrator | Thursday 05 March 2026 00:51:30 +0000 (0:00:00.795) 0:01:58.973 ******** 2026-03-05 00:52:13.127961 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:52:13.127969 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:52:13.127977 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:52:13.127985 | orchestrator | 2026-03-05 00:52:13.127993 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-03-05 00:52:13.128000 | orchestrator | Thursday 05 March 2026 00:51:31 +0000 (0:00:00.803) 0:01:59.777 ******** 2026-03-05 00:52:13.128008 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:52:13.128016 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:52:13.128024 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:52:13.128031 | orchestrator | 2026-03-05 00:52:13.128039 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-03-05 00:52:13.128047 | orchestrator | Thursday 05 March 2026 00:51:32 +0000 (0:00:00.954) 0:02:00.731 ******** 2026-03-05 00:52:13.128055 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:52:13.128063 | orchestrator | skipping: 
[testbed-node-2] 2026-03-05 00:52:13.128071 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:52:13.128078 | orchestrator | 2026-03-05 00:52:13.128086 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-03-05 00:52:13.128094 | orchestrator | Thursday 05 March 2026 00:51:33 +0000 (0:00:00.932) 0:02:01.663 ******** 2026-03-05 00:52:13.128102 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:52:13.128110 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:52:13.128123 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:52:13.128131 | orchestrator | 2026-03-05 00:52:13.128139 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-03-05 00:52:13.128147 | orchestrator | Thursday 05 March 2026 00:51:34 +0000 (0:00:00.981) 0:02:02.645 ******** 2026-03-05 00:52:13.128155 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:52:13.128163 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:52:13.128170 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:52:13.128178 | orchestrator | 2026-03-05 00:52:13.128186 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2026-03-05 00:52:13.128194 | orchestrator | Thursday 05 March 2026 00:51:35 +0000 (0:00:00.809) 0:02:03.455 ******** 2026-03-05 00:52:13.128202 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:52:13.128215 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:52:13.128223 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:52:13.128231 | orchestrator | 2026-03-05 00:52:13.128238 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-03-05 00:52:13.128246 | orchestrator | Thursday 05 March 2026 00:51:35 +0000 (0:00:00.310) 0:02:03.765 ******** 2026-03-05 00:52:13.128254 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 
'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:52:13.128263 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:52:13.128271 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:52:13.128279 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:52:13.128295 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:52:13.128303 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 
'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:52:13.128311 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:52:13.128319 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:52:13.128333 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:52:13.128341 | orchestrator | 2026-03-05 00:52:13.128349 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-03-05 00:52:13.128357 | orchestrator | Thursday 05 March 2026 00:51:36 +0000 (0:00:01.398) 0:02:05.164 ******** 2026-03-05 00:52:13.128369 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 
'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:52:13.128377 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:52:13.128385 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:52:13.128393 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:52:13.128408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:52:13.128417 | orchestrator | ok: 
[testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:52:13.128425 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:52:13.128433 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:52:13.128441 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:52:13.128449 | orchestrator | 2026-03-05 00:52:13.128457 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-03-05 00:52:13.128464 | orchestrator | Thursday 05 March 2026 00:51:41 +0000 (0:00:04.836) 0:02:10.001 ******** 
2026-03-05 00:52:13.128478 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:52:13.128495 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:52:13.128503 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:52:13.128511 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:52:13.128519 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:52:13.128533 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:52:13.128542 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:52:13.128550 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:52:13.128558 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:52:13.128566 | orchestrator | 2026-03-05 00:52:13.128574 | orchestrator | TASK [ovn-db : Flush handlers] 
************************************************* 2026-03-05 00:52:13.128581 | orchestrator | Thursday 05 March 2026 00:51:44 +0000 (0:00:02.826) 0:02:12.827 ******** 2026-03-05 00:52:13.128589 | orchestrator | 2026-03-05 00:52:13.128597 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-05 00:52:13.128605 | orchestrator | Thursday 05 March 2026 00:51:44 +0000 (0:00:00.090) 0:02:12.917 ******** 2026-03-05 00:52:13.128613 | orchestrator | 2026-03-05 00:52:13.128621 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-05 00:52:13.128629 | orchestrator | Thursday 05 March 2026 00:51:44 +0000 (0:00:00.075) 0:02:12.993 ******** 2026-03-05 00:52:13.128636 | orchestrator | 2026-03-05 00:52:13.128644 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-03-05 00:52:13.128673 | orchestrator | Thursday 05 March 2026 00:51:44 +0000 (0:00:00.108) 0:02:13.102 ******** 2026-03-05 00:52:13.128682 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:52:13.128690 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:52:13.128698 | orchestrator | 2026-03-05 00:52:13.128710 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-03-05 00:52:13.128719 | orchestrator | Thursday 05 March 2026 00:51:51 +0000 (0:00:06.661) 0:02:19.763 ******** 2026-03-05 00:52:13.128726 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:52:13.128734 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:52:13.128742 | orchestrator | 2026-03-05 00:52:13.128750 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-03-05 00:52:13.128757 | orchestrator | Thursday 05 March 2026 00:51:58 +0000 (0:00:06.580) 0:02:26.343 ******** 2026-03-05 00:52:13.128765 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:52:13.128773 | orchestrator | changed: 
[testbed-node-2] 2026-03-05 00:52:13.128781 | orchestrator | 2026-03-05 00:52:13.128793 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-03-05 00:52:13.128807 | orchestrator | Thursday 05 March 2026 00:52:04 +0000 (0:00:06.657) 0:02:33.001 ******** 2026-03-05 00:52:13.128814 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:52:13.128822 | orchestrator | 2026-03-05 00:52:13.128830 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-03-05 00:52:13.128838 | orchestrator | Thursday 05 March 2026 00:52:04 +0000 (0:00:00.209) 0:02:33.211 ******** 2026-03-05 00:52:13.128846 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:52:13.128854 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:52:13.128861 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:52:13.128869 | orchestrator | 2026-03-05 00:52:13.128877 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-03-05 00:52:13.128885 | orchestrator | Thursday 05 March 2026 00:52:05 +0000 (0:00:00.910) 0:02:34.121 ******** 2026-03-05 00:52:13.128893 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:52:13.128901 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:52:13.128909 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:52:13.128916 | orchestrator | 2026-03-05 00:52:13.128924 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-03-05 00:52:13.128932 | orchestrator | Thursday 05 March 2026 00:52:06 +0000 (0:00:00.642) 0:02:34.764 ******** 2026-03-05 00:52:13.128940 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:52:13.128948 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:52:13.128956 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:52:13.128964 | orchestrator | 2026-03-05 00:52:13.128977 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] 
*************************** 2026-03-05 00:52:13.128990 | orchestrator | Thursday 05 March 2026 00:52:07 +0000 (0:00:00.969) 0:02:35.734 ******** 2026-03-05 00:52:13.129002 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:52:13.129013 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:52:13.129029 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:52:13.129048 | orchestrator | 2026-03-05 00:52:13.129060 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-03-05 00:52:13.129073 | orchestrator | Thursday 05 March 2026 00:52:08 +0000 (0:00:00.918) 0:02:36.653 ******** 2026-03-05 00:52:13.129085 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:52:13.129098 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:52:13.129111 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:52:13.129124 | orchestrator | 2026-03-05 00:52:13.129137 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-03-05 00:52:13.129150 | orchestrator | Thursday 05 March 2026 00:52:09 +0000 (0:00:00.772) 0:02:37.425 ******** 2026-03-05 00:52:13.129164 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:52:13.129177 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:52:13.129190 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:52:13.129198 | orchestrator | 2026-03-05 00:52:13.129205 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-05 00:52:13.129213 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-03-05 00:52:13.129222 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-03-05 00:52:13.129230 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-03-05 00:52:13.129238 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 
skipped=0 rescued=0 ignored=0 2026-03-05 00:52:13.129246 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 00:52:13.129254 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 00:52:13.129269 | orchestrator | 2026-03-05 00:52:13.129279 | orchestrator | 2026-03-05 00:52:13.129292 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-05 00:52:13.129304 | orchestrator | Thursday 05 March 2026 00:52:10 +0000 (0:00:00.937) 0:02:38.363 ******** 2026-03-05 00:52:13.129315 | orchestrator | =============================================================================== 2026-03-05 00:52:13.129327 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 32.69s 2026-03-05 00:52:13.129339 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 24.01s 2026-03-05 00:52:13.129352 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 13.58s 2026-03-05 00:52:13.129365 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 13.58s 2026-03-05 00:52:13.129378 | orchestrator | ovn-db : Restart ovn-nb-db container ------------------------------------ 9.48s 2026-03-05 00:52:13.129391 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.92s 2026-03-05 00:52:13.129402 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.84s 2026-03-05 00:52:13.129416 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 3.84s 2026-03-05 00:52:13.129424 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.83s 2026-03-05 00:52:13.129432 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.80s 2026-03-05 
00:52:13.129439 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.33s 2026-03-05 00:52:13.129447 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.82s 2026-03-05 00:52:13.129455 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.76s 2026-03-05 00:52:13.129468 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.74s 2026-03-05 00:52:13.129476 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.66s 2026-03-05 00:52:13.129484 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.58s 2026-03-05 00:52:13.129491 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.58s 2026-03-05 00:52:13.129499 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.56s 2026-03-05 00:52:13.129507 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.40s 2026-03-05 00:52:13.129514 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.33s 2026-03-05 00:52:13.129522 | orchestrator | 2026-03-05 00:52:13 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED 2026-03-05 00:52:13.129531 | orchestrator | 2026-03-05 00:52:13 | INFO  | Task 54ba0c27-c6ab-48b5-a972-50508998d0d1 is in state STARTED 2026-03-05 00:52:13.129539 | orchestrator | 2026-03-05 00:52:13 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:52:16.172028 | orchestrator | 2026-03-05 00:52:16 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED 2026-03-05 00:52:16.173647 | orchestrator | 2026-03-05 00:52:16 | INFO  | Task 54ba0c27-c6ab-48b5-a972-50508998d0d1 is in state STARTED 2026-03-05 00:52:16.173765 | orchestrator | 2026-03-05 00:52:16 | INFO  | Wait 1 second(s) until the next check 
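The "Task … is in state STARTED / Wait 1 second(s) until the next check" lines above come from an orchestrator loop that polls remote task state until completion. A minimal sketch of that pattern follows; `get_task_state` is a hypothetical stand-in for the real status lookup (the actual OSISM client API is not shown in this log), and the interval/timeout values are illustrative, not taken from the job configuration.

```python
import time

# Terminal states after which a task no longer needs polling
# (assumed set; the log shows STARTED and SUCCESS).
TERMINAL_STATES = {"SUCCESS", "FAILURE"}

def wait_for_tasks(task_ids, get_task_state, interval=1.0, timeout=3600):
    """Poll each task's state until all reach a terminal state.

    get_task_state: callable(task_id) -> state string (hypothetical hook).
    Raises TimeoutError if tasks are still pending past the deadline.
    """
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    states = {}
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still pending: {sorted(pending)}")
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            states[task_id] = state
            print(f"Task {task_id} is in state {state}")
            if state in TERMINAL_STATES:
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
    return states
```

Note that although the message says "Wait 1 second(s)", the log timestamps advance by roughly three seconds per iteration, since each status lookup itself takes time on top of the sleep.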
2026-03-05 00:52:19 .. 00:55:52 | orchestrator | [~65 repeated polling iterations elided: "Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED", "Task 54ba0c27-c6ab-48b5-a972-50508998d0d1 is in state STARTED", "Wait 1 second(s) until the next check", logged roughly every 3 seconds]
2026-03-05 00:55:55.666697 | orchestrator | 2026-03-05 00:55:55 | INFO  | Task 9992cc8e-05e2-4dcb-83af-88c297892dd7 is in state STARTED
2026-03-05 00:55:55.666973 | orchestrator | 2026-03-05 00:55:55 | INFO  | Task 86f80eee-c277-41da-b162-68fb5ea68664 is in state STARTED
2026-03-05 00:55:55.668849 | orchestrator | 2026-03-05 00:55:55 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED
2026-03-05 00:55:55.678098 | orchestrator | 2026-03-05 00:55:55 | INFO  | Task 54ba0c27-c6ab-48b5-a972-50508998d0d1 is in state SUCCESS
2026-03-05 00:55:55.679457 | orchestrator | 2026-03-05 00:55:55.679535 | orchestrator | 2026-03-05 00:55:55.679547 | 
orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-05 00:55:55.679557 | orchestrator | 2026-03-05 00:55:55.679566 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-05 00:55:55.679575 | orchestrator | Thursday 05 March 2026 00:48:16 +0000 (0:00:00.361) 0:00:00.361 ******** 2026-03-05 00:55:55.679583 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:55:55.679611 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:55:55.679621 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:55:55.679629 | orchestrator | 2026-03-05 00:55:55.679637 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-05 00:55:55.679715 | orchestrator | Thursday 05 March 2026 00:48:16 +0000 (0:00:00.387) 0:00:00.749 ******** 2026-03-05 00:55:55.679725 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2026-03-05 00:55:55.679734 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2026-03-05 00:55:55.679742 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2026-03-05 00:55:55.679752 | orchestrator | 2026-03-05 00:55:55.679765 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2026-03-05 00:55:55.679777 | orchestrator | 2026-03-05 00:55:55.679799 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-03-05 00:55:55.679812 | orchestrator | Thursday 05 March 2026 00:48:17 +0000 (0:00:00.823) 0:00:01.573 ******** 2026-03-05 00:55:55.679824 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:55:55.679836 | orchestrator | 2026-03-05 00:55:55.679848 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2026-03-05 00:55:55.679917 | orchestrator | Thursday 05 March 
2026 00:48:18 +0000 (0:00:01.207) 0:00:02.780 ******** 2026-03-05 00:55:55.679999 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:55:55.680037 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:55:55.680052 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:55:55.680100 | orchestrator | 2026-03-05 00:55:55.680115 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-03-05 00:55:55.680125 | orchestrator | Thursday 05 March 2026 00:48:19 +0000 (0:00:00.987) 0:00:03.768 ******** 2026-03-05 00:55:55.680134 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:55:55.680144 | orchestrator | 2026-03-05 00:55:55.680153 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2026-03-05 00:55:55.680163 | orchestrator | Thursday 05 March 2026 00:48:21 +0000 (0:00:01.720) 0:00:05.488 ******** 2026-03-05 00:55:55.680172 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:55:55.680209 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:55:55.680219 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:55:55.680228 | orchestrator | 2026-03-05 00:55:55.680298 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2026-03-05 00:55:55.680309 | orchestrator | Thursday 05 March 2026 00:48:22 +0000 (0:00:00.804) 0:00:06.292 ******** 2026-03-05 00:55:55.680319 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-03-05 00:55:55.680329 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-03-05 00:55:55.680338 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-03-05 00:55:55.680346 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-03-05 00:55:55.680355 | orchestrator | changed: 
[testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-03-05 00:55:55.680363 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-03-05 00:55:55.680371 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-03-05 00:55:55.680380 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-03-05 00:55:55.680388 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-03-05 00:55:55.680397 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-03-05 00:55:55.680405 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-03-05 00:55:55.680426 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-03-05 00:55:55.680435 | orchestrator | 2026-03-05 00:55:55.680443 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-05 00:55:55.680451 | orchestrator | Thursday 05 March 2026 00:48:26 +0000 (0:00:03.788) 0:00:10.081 ******** 2026-03-05 00:55:55.680470 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-03-05 00:55:55.680479 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-03-05 00:55:55.680487 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-03-05 00:55:55.680496 | orchestrator | 2026-03-05 00:55:55.680504 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-05 00:55:55.680512 | orchestrator | Thursday 05 March 2026 00:48:26 +0000 (0:00:00.881) 0:00:10.962 ******** 2026-03-05 00:55:55.680520 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-03-05 00:55:55.680528 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 
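The sysctl task above applies items such as net.ipv6.ip_nonlocal_bind=1, while items whose value is the KOLLA_UNSET sentinel are left at the kernel default (which is why they report `ok` rather than `changed`). The real role applies each item with Ansible's sysctl module; this hypothetical helper just sketches the skip semantics:

```python
def render_sysctl(settings, unset_marker="KOLLA_UNSET"):
    """Render sysctl settings into `key = value` lines, skipping
    entries flagged with the unset marker (left at the kernel
    default, e.g. net.ipv4.tcp_retries2 in the run above)."""
    lines = []
    for item in settings:
        if item["value"] == unset_marker:
            continue  # not managed; kernel default stays in effect
        lines.append(f"{item['name']} = {item['value']}")
    return "\n".join(lines)
```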
2026-03-05 00:55:55.680537 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-03-05 00:55:55.680545 | orchestrator | 2026-03-05 00:55:55.680573 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-05 00:55:55.680615 | orchestrator | Thursday 05 March 2026 00:48:29 +0000 (0:00:02.187) 0:00:13.153 ******** 2026-03-05 00:55:55.680625 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2026-03-05 00:55:55.680633 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:55.680657 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2026-03-05 00:55:55.680667 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:55.680696 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2026-03-05 00:55:55.680705 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:55.680714 | orchestrator | 2026-03-05 00:55:55.680722 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2026-03-05 00:55:55.680756 | orchestrator | Thursday 05 March 2026 00:48:31 +0000 (0:00:02.649) 0:00:15.803 ******** 2026-03-05 00:55:55.680768 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-05 00:55:55.680860 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
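The module-load tasks above first load ip_vs with modprobe (keepalived needs IPVS support), then persist it via a modules-load.d drop-in so it is reloaded at boot. A sketch of building such a drop-in; the `kolla` basename here is illustrative, not taken from the role:

```python
def modules_load_file(modules, basename="kolla"):
    """Build the path and contents of a modules-load.d(5) drop-in
    that makes the listed kernel modules load at boot."""
    path = f"/etc/modules-load.d/{basename}.conf"
    body = "\n".join(modules) + "\n"  # one module name per line
    return path, body
```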
'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-05 00:55:55.680872 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-05 00:55:55.680881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-05 00:55:55.680895 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-05 00:55:55.680910 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-05 00:55:55.680921 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-05 00:55:55.680935 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-05 00:55:55.680944 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-05 00:55:55.680953 | orchestrator | 2026-03-05 00:55:55.680962 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-03-05 00:55:55.680970 | orchestrator | Thursday 05 March 2026 00:48:34 +0000 (0:00:03.146) 0:00:18.949 ******** 2026-03-05 00:55:55.680979 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:55:55.680988 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:55:55.680996 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:55:55.681097 | orchestrator | 2026-03-05 00:55:55.681110 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-03-05 00:55:55.681119 | orchestrator | Thursday 05 March 2026 00:48:36 +0000 (0:00:01.073) 0:00:20.022 ******** 2026-03-05 00:55:55.681127 | orchestrator | changed: [testbed-node-0] => (item=users) 2026-03-05 00:55:55.681135 | orchestrator | changed: [testbed-node-1] => (item=users) 2026-03-05 00:55:55.681143 | orchestrator | changed: [testbed-node-2] => (item=users) 2026-03-05 00:55:55.681151 | orchestrator | changed: [testbed-node-0] => (item=rules) 2026-03-05 00:55:55.681159 | orchestrator | changed: 
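Each container item above (except keepalived) carries a healthcheck dict with interval, retries, start_period, test, and timeout. As a rough sketch of what that dict expresses, here is a hypothetical mapping onto docker-run style health flags; appending an `s` unit suffix is an assumption, since the log stores bare numbers:

```python
def healthcheck_args(hc):
    """Translate a kolla-style healthcheck dict (as in the service
    items above) into docker-run style --health-* arguments."""
    test = hc["test"]
    # CMD-SHELL means "run the rest through a shell"
    cmd = " ".join(test[1:] if test[0] == "CMD-SHELL" else test)
    return [
        f"--health-cmd={cmd}",
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
    ]
```

For the proxysql item this yields a listen check on port 6032; the haproxy items instead curl the node's own API address on port 61313.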
[testbed-node-1] => (item=rules) 2026-03-05 00:55:55.681167 | orchestrator | changed: [testbed-node-2] => (item=rules) 2026-03-05 00:55:55.681175 | orchestrator | 2026-03-05 00:55:55.681183 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-03-05 00:55:55.681192 | orchestrator | Thursday 05 March 2026 00:48:38 +0000 (0:00:02.355) 0:00:22.378 ******** 2026-03-05 00:55:55.681201 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:55:55.681241 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:55:55.681249 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:55:55.681258 | orchestrator | 2026-03-05 00:55:55.681266 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-03-05 00:55:55.681275 | orchestrator | Thursday 05 March 2026 00:48:39 +0000 (0:00:01.372) 0:00:23.751 ******** 2026-03-05 00:55:55.681283 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:55:55.681291 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:55:55.681307 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:55:55.681316 | orchestrator | 2026-03-05 00:55:55.681324 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-03-05 00:55:55.681331 | orchestrator | Thursday 05 March 2026 00:48:42 +0000 (0:00:03.097) 0:00:26.849 ******** 2026-03-05 00:55:55.681346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-05 00:55:55.681372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-05 00:55:55.681382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-05 00:55:55.681393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__1622323e394c8dda47dfdda5fde9da334b013db0', '__omit_place_holder__1622323e394c8dda47dfdda5fde9da334b013db0'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-05 
00:55:55.681402 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:55.681411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-05 00:55:55.681419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-05 00:55:55.681440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-05 00:55:55.681449 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__1622323e394c8dda47dfdda5fde9da334b013db0', '__omit_place_holder__1622323e394c8dda47dfdda5fde9da334b013db0'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-05 00:55:55.681486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-05 00:55:55.681496 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:55.681505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-05 00:55:55.681513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-05 00:55:55.681521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__1622323e394c8dda47dfdda5fde9da334b013db0', '__omit_place_holder__1622323e394c8dda47dfdda5fde9da334b013db0'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-05 00:55:55.681530 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:55.681538 | orchestrator | 2026-03-05 00:55:55.681545 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-03-05 00:55:55.681554 | orchestrator | Thursday 05 March 2026 00:48:44 +0000 (0:00:01.948) 0:00:28.797 ******** 2026-03-05 00:55:55.681562 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-05 00:55:55.681577 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-05 00:55:55.681619 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-05 00:55:55.681628 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-05 00:55:55.681635 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-05 00:55:55.681642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-05 00:55:55.681649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-05 00:55:55.681660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__1622323e394c8dda47dfdda5fde9da334b013db0', '__omit_place_holder__1622323e394c8dda47dfdda5fde9da334b013db0'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-05 00:55:55.681672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__1622323e394c8dda47dfdda5fde9da334b013db0', '__omit_place_holder__1622323e394c8dda47dfdda5fde9da334b013db0'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-05 00:55:55.681684 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-05 00:55:55.681691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-05 00:55:55.681698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__1622323e394c8dda47dfdda5fde9da334b013db0', '__omit_place_holder__1622323e394c8dda47dfdda5fde9da334b013db0'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-05 00:55:55.681705 | orchestrator | 2026-03-05 00:55:55.681712 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-03-05 00:55:55.681719 | orchestrator | Thursday 05 March 2026 00:48:48 +0000 (0:00:03.296) 0:00:32.094 ******** 2026-03-05 00:55:55.681726 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-05 00:55:55.681734 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-05 00:55:55.681753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-05 00:55:55.681767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-05 00:55:55.681775 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-05 00:55:55.681782 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-05 00:55:55.681789 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 
'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-05 00:55:55.681798 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-05 00:55:55.681810 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-05 00:55:55.681817 | orchestrator | 2026-03-05 00:55:55.681843 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-03-05 00:55:55.681850 | orchestrator | Thursday 05 March 2026 00:48:52 +0000 (0:00:04.569) 0:00:36.664 ******** 2026-03-05 00:55:55.681857 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-05 00:55:55.681865 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-05 00:55:55.681872 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-05 00:55:55.681878 | orchestrator | 2026-03-05 00:55:55.681885 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-03-05 00:55:55.681892 | orchestrator | Thursday 05 March 2026 00:48:56 +0000 (0:00:03.815) 0:00:40.479 ******** 2026-03-05 00:55:55.681910 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-05 00:55:55.681918 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-05 00:55:55.681924 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-05 00:55:55.681931 | orchestrator | 2026-03-05 00:55:55.682955 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-03-05 00:55:55.682999 | orchestrator | Thursday 05 March 2026 00:49:04 +0000 (0:00:08.092) 0:00:48.571 ******** 2026-03-05 00:55:55.683030 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:55.683042 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:55.683054 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:55.683066 | orchestrator | 2026-03-05 00:55:55.683073 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-03-05 00:55:55.683080 | orchestrator | Thursday 05 March 2026 00:49:05 +0000 (0:00:00.996) 0:00:49.567 ******** 2026-03-05 00:55:55.683087 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-05 00:55:55.683095 | orchestrator | changed: [testbed-node-1] => 
(item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-05 00:55:55.683102 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-05 00:55:55.683109 | orchestrator | 2026-03-05 00:55:55.683116 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-03-05 00:55:55.683168 | orchestrator | Thursday 05 March 2026 00:49:10 +0000 (0:00:04.823) 0:00:54.391 ******** 2026-03-05 00:55:55.683217 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-05 00:55:55.683225 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-05 00:55:55.683232 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-05 00:55:55.683239 | orchestrator | 2026-03-05 00:55:55.683246 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-03-05 00:55:55.683263 | orchestrator | Thursday 05 March 2026 00:49:14 +0000 (0:00:03.732) 0:00:58.124 ******** 2026-03-05 00:55:55.683270 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-03-05 00:55:55.683277 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-03-05 00:55:55.683283 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-03-05 00:55:55.683290 | orchestrator | 2026-03-05 00:55:55.683297 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-03-05 00:55:55.683303 | orchestrator | Thursday 05 March 2026 00:49:16 +0000 (0:00:02.852) 0:01:00.976 ******** 2026-03-05 00:55:55.683310 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-03-05 00:55:55.683317 | orchestrator | changed: 
[testbed-node-1] => (item=haproxy-internal.pem) 2026-03-05 00:55:55.683324 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-03-05 00:55:55.683330 | orchestrator | 2026-03-05 00:55:55.683337 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-03-05 00:55:55.683344 | orchestrator | Thursday 05 March 2026 00:49:19 +0000 (0:00:02.429) 0:01:03.406 ******** 2026-03-05 00:55:55.683351 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:55:55.683358 | orchestrator | 2026-03-05 00:55:55.683364 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2026-03-05 00:55:55.683371 | orchestrator | Thursday 05 March 2026 00:49:20 +0000 (0:00:01.071) 0:01:04.478 ******** 2026-03-05 00:55:55.683379 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-05 00:55:55.683394 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-05 00:55:55.683411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-05 00:55:55.683419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-05 00:55:55.683430 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-05 00:55:55.683438 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-05 00:55:55.683447 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-05 00:55:55.683463 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 
2026-03-05 00:55:55.683471 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-05 00:55:55.683479 | orchestrator | 2026-03-05 00:55:55.683487 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2026-03-05 00:55:55.683495 | orchestrator | Thursday 05 March 2026 00:49:24 +0000 (0:00:03.655) 0:01:08.134 ******** 2026-03-05 00:55:55.683509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-05 00:55:55.683518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-05 00:55:55.683531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-05 00:55:55.683538 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:55.683547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-05 00:55:55.683555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-05 00:55:55.683566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-05 00:55:55.683575 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:55.683583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-05 00:55:55.683607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-05 00:55:55.683621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-05 00:55:55.683629 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:55.683637 | orchestrator | 2026-03-05 00:55:55.683644 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2026-03-05 00:55:55.683651 | orchestrator | Thursday 05 March 2026 00:49:25 +0000 (0:00:00.899) 0:01:09.033 ******** 2026-03-05 00:55:55.683657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-05 00:55:55.683665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-05 00:55:55.683672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-05 00:55:55.683678 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:55.683697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-05 00:55:55.683709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-05 00:55:55.683770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-05 00:55:55.683778 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:55.683785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-05 00:55:55.683792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-05 00:55:55.683799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-05 00:55:55.683806 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:55.683813 | orchestrator | 2026-03-05 00:55:55.683820 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-03-05 00:55:55.683827 | orchestrator | Thursday 05 March 2026 00:49:26 +0000 (0:00:01.502) 0:01:10.536 ******** 2026-03-05 00:55:55.683837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-05 00:55:55.683849 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-05 00:55:55.683897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-05 00:55:55.683905 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:55.683912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-05 00:55:55.683919 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-05 00:55:55.683927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-05 00:55:55.683933 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:55.683940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-05 00:55:55.683951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': 
{'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-05 00:55:55.683992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-05 00:55:55.684001 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:55.684023 | orchestrator | 2026-03-05 00:55:55.684030 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-03-05 00:55:55.684037 | orchestrator | Thursday 05 March 2026 00:49:27 +0000 (0:00:01.457) 0:01:11.994 ******** 2026-03-05 00:55:55.684044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-05 00:55:55.684051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-05 00:55:55.684058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-05 00:55:55.684065 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:55.684072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-05 00:55:55.684117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-05 00:55:55.684131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-05 00:55:55.684138 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:55.684150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-05 00:55:55.684158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-05 00:55:55.684165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-05 00:55:55.684172 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:55.684178 | orchestrator | 2026-03-05 00:55:55.684185 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-03-05 00:55:55.684192 | orchestrator | Thursday 05 March 2026 00:49:29 +0000 (0:00:01.506) 0:01:13.500 ******** 2026-03-05 00:55:55.684199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-05 00:55:55.684206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-05 00:55:55.684221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-05 00:55:55.684228 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:55.684238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-05 00:55:55.684246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-05 00:55:55.684252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-05 00:55:55.684259 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:55.684266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-05 00:55:55.684273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-05 00:55:55.684284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-05 00:55:55.684297 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:55.684304 | orchestrator | 2026-03-05 00:55:55.684311 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2026-03-05 00:55:55.684318 | orchestrator | Thursday 05 March 2026 00:49:31 +0000 (0:00:01.958) 0:01:15.459 ******** 2026-03-05 00:55:55.684325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-05 00:55:55.684336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-05 00:55:55.684343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-05 00:55:55.684350 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:55.684357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-05 00:55:55.684364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-05 00:55:55.684378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-05 00:55:55.684385 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:55.684395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-05 00:55:55.684405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-05 00:55:55.684412 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-05 00:55:55.684419 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:55.684426 | orchestrator | 2026-03-05 00:55:55.684433 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2026-03-05 00:55:55.684440 | orchestrator | Thursday 05 March 2026 00:49:32 +0000 (0:00:01.217) 0:01:16.676 ******** 2026-03-05 
00:55:55.684446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-05 00:55:55.684453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-05 00:55:55.684465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-05 00:55:55.684472 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:55.684482 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-05 00:55:55.684489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-05 00:55:55.684503 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-05 00:55:55.684510 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:55.684517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 
'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-05 00:55:55.684524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-05 00:55:55.684531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-05 00:55:55.684555 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:55.684562 | orchestrator | 2026-03-05 00:55:55.684569 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend 
internal TLS key] **** 2026-03-05 00:55:55.684575 | orchestrator | Thursday 05 March 2026 00:49:33 +0000 (0:00:01.289) 0:01:17.966 ******** 2026-03-05 00:55:55.684582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-05 00:55:55.684593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-05 00:55:55.684604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-05 00:55:55.684612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-05 00:55:55.684629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-05 00:55:55.684636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-05 00:55:55.684648 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:55.684654 
| orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:55.684661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-05 00:55:55.684668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-05 00:55:55.684679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-05 00:55:55.684685 | orchestrator | skipping: [testbed-node-2] 
2026-03-05 00:55:55.684692 | orchestrator | 2026-03-05 00:55:55.684699 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-03-05 00:55:55.684706 | orchestrator | Thursday 05 March 2026 00:49:35 +0000 (0:00:01.293) 0:01:19.259 ******** 2026-03-05 00:55:55.684732 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-05 00:55:55.684740 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-05 00:55:55.684752 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-05 00:55:55.684759 | orchestrator | 2026-03-05 00:55:55.684766 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-03-05 00:55:55.684772 | orchestrator | Thursday 05 March 2026 00:49:37 +0000 (0:00:02.164) 0:01:21.424 ******** 2026-03-05 00:55:55.684779 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-05 00:55:55.684786 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-05 00:55:55.684854 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-05 00:55:55.684872 | orchestrator | 2026-03-05 00:55:55.684879 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-03-05 00:55:55.684886 | orchestrator | Thursday 05 March 2026 00:49:38 +0000 (0:00:01.561) 0:01:22.986 ******** 2026-03-05 00:55:55.684893 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-05 00:55:55.684917 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 
'sshd_config'})  2026-03-05 00:55:55.684924 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-05 00:55:55.684931 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-05 00:55:55.684938 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:55.684945 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-05 00:55:55.684951 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:55.684958 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-05 00:55:55.684965 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:55.684971 | orchestrator | 2026-03-05 00:55:55.684978 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2026-03-05 00:55:55.684985 | orchestrator | Thursday 05 March 2026 00:49:40 +0000 (0:00:01.280) 0:01:24.267 ******** 2026-03-05 00:55:55.684992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-05 00:55:55.684999 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 
'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-05 00:55:55.685061 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-05 00:55:55.685075 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-05 00:55:55.685082 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': 
False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-05 00:55:55.685104 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-05 00:55:55.685111 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-05 00:55:55.685119 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-05 00:55:55.685127 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-05 00:55:55.685134 | orchestrator | 2026-03-05 00:55:55.685141 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-03-05 00:55:55.685148 | orchestrator | Thursday 05 March 2026 00:49:43 +0000 (0:00:03.259) 0:01:27.526 ******** 2026-03-05 00:55:55.685155 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:55:55.685162 | orchestrator | 2026-03-05 00:55:55.685169 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-03-05 00:55:55.685176 | orchestrator | Thursday 05 March 2026 00:49:44 +0000 (0:00:00.803) 0:01:28.330 ******** 2026-03-05 00:55:55.685185 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': 
'30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-05 00:55:55.685202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-05 00:55:55.685210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.685217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.685224 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-05 00:55:55.685252 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-05 00:55:55.685260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': 
{'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-05 00:55:55.685983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-05 00:55:55.686079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.686094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': 
['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.686105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.686116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.686127 | orchestrator | 2026-03-05 00:55:55.686138 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-03-05 00:55:55.686149 | orchestrator | Thursday 05 March 2026 00:49:50 +0000 (0:00:06.127) 0:01:34.457 ******** 2026-03-05 00:55:55.686168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 
'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-05 00:55:55.686199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-05 00:55:55.686209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-05 00:55:55.686219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.686229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.686240 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:55.686251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-05 00:55:55.686336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.686355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.686365 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:55.686382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-05 00:55:55.686392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-05 00:55:55.686402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.686412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.686422 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:55.686432 | orchestrator | 2026-03-05 00:55:55.686442 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-03-05 00:55:55.686453 | orchestrator | Thursday 05 March 2026 00:49:53 +0000 (0:00:02.918) 0:01:37.375 ******** 2026-03-05 00:55:55.686463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-05 00:55:55.686480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-05 00:55:55.686498 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:55.686508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-05 00:55:55.686519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-05 00:55:55.686531 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:55.686541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-05 00:55:55.686553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8042', 'listen_port': '8042'}})  2026-03-05 00:55:55.686563 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:55.686574 | orchestrator | 2026-03-05 00:55:55.686590 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-03-05 00:55:55.686600 | orchestrator | Thursday 05 March 2026 00:49:54 +0000 (0:00:01.145) 0:01:38.521 ******** 2026-03-05 00:55:55.686610 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:55:55.686622 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:55:55.686633 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:55:55.686679 | orchestrator | 2026-03-05 00:55:55.686686 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-03-05 00:55:55.686693 | orchestrator | Thursday 05 March 2026 00:49:56 +0000 (0:00:01.572) 0:01:40.093 ******** 2026-03-05 00:55:55.686700 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:55:55.686707 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:55:55.686712 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:55:55.686718 | orchestrator | 2026-03-05 00:55:55.686724 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-03-05 00:55:55.686730 | orchestrator | Thursday 05 March 2026 00:49:58 +0000 (0:00:02.287) 0:01:42.380 ******** 2026-03-05 00:55:55.686736 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:55:55.686741 | orchestrator | 2026-03-05 00:55:55.686747 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-03-05 00:55:55.686753 | orchestrator | Thursday 05 March 2026 00:49:59 +0000 (0:00:00.825) 0:01:43.205 ******** 2026-03-05 00:55:55.686761 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-05 00:55:55.686768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.686785 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-05 00:55:55.686791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.686801 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.686807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.686814 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-05 00:55:55.686820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  
2026-03-05 00:55:55.686833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.686839 | orchestrator | 2026-03-05 00:55:55.686845 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-03-05 00:55:55.686851 | orchestrator | Thursday 05 March 2026 00:50:04 +0000 (0:00:05.336) 0:01:48.542 ******** 2026-03-05 00:55:55.686861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-05 00:55:55.686867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 
'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.686873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.686879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-05 00:55:55.686894 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.686900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.686906 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:55.686912 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:55.686926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-05 00:55:55.686936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.686946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.686962 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:55.686971 | orchestrator | 
2026-03-05 00:55:55.686981 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-03-05 00:55:55.686990 | orchestrator | Thursday 05 March 2026 00:50:05 +0000 (0:00:00.987) 0:01:49.529 ******** 2026-03-05 00:55:55.687001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-05 00:55:55.687067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-05 00:55:55.687078 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:55.687088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-05 00:55:55.687098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-05 00:55:55.687107 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:55.687122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-05 00:55:55.687132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-05 00:55:55.687141 | orchestrator | skipping: [testbed-node-2] 2026-03-05 
00:55:55.687150 | orchestrator | 2026-03-05 00:55:55.687160 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-03-05 00:55:55.687170 | orchestrator | Thursday 05 March 2026 00:50:06 +0000 (0:00:01.438) 0:01:50.968 ******** 2026-03-05 00:55:55.687179 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:55:55.687188 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:55:55.687198 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:55:55.687208 | orchestrator | 2026-03-05 00:55:55.687217 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-03-05 00:55:55.687228 | orchestrator | Thursday 05 March 2026 00:50:08 +0000 (0:00:01.639) 0:01:52.608 ******** 2026-03-05 00:55:55.687234 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:55:55.687240 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:55:55.687246 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:55:55.687251 | orchestrator | 2026-03-05 00:55:55.687262 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-03-05 00:55:55.687268 | orchestrator | Thursday 05 March 2026 00:50:11 +0000 (0:00:02.707) 0:01:55.315 ******** 2026-03-05 00:55:55.687274 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:55.687280 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:55.687286 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:55.687292 | orchestrator | 2026-03-05 00:55:55.687297 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-03-05 00:55:55.687303 | orchestrator | Thursday 05 March 2026 00:50:11 +0000 (0:00:00.356) 0:01:55.671 ******** 2026-03-05 00:55:55.687309 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:55:55.687315 | orchestrator | 2026-03-05 00:55:55.687320 | orchestrator | TASK [haproxy-config : Copying 
over ceph-rgw haproxy config] ******************* 2026-03-05 00:55:55.687335 | orchestrator | Thursday 05 March 2026 00:50:13 +0000 (0:00:01.913) 0:01:57.585 ******** 2026-03-05 00:55:55.687342 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-03-05 00:55:55.687349 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-03-05 00:55:55.687359 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 
'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-03-05 00:55:55.687365 | orchestrator | 2026-03-05 00:55:55.687371 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-03-05 00:55:55.687377 | orchestrator | Thursday 05 March 2026 00:50:18 +0000 (0:00:05.073) 0:02:02.658 ******** 2026-03-05 00:55:55.687386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-03-05 00:55:55.687392 | orchestrator | skipping: 
[testbed-node-1] 2026-03-05 00:55:55.687398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-03-05 00:55:55.687408 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:55.687415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-03-05 00:55:55.687420 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:55.687426 | orchestrator | 2026-03-05 00:55:55.687432 | orchestrator | TASK 
[haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-03-05 00:55:55.687438 | orchestrator | Thursday 05 March 2026 00:50:21 +0000 (0:00:02.942) 0:02:05.601 ******** 2026-03-05 00:55:55.687445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-05 00:55:55.687453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-05 00:55:55.687460 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:55.687468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-05 00:55:55.687474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 
5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-05 00:55:55.687479 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:55.687488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-05 00:55:55.687497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-05 00:55:55.687502 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:55.687507 | orchestrator | 2026-03-05 00:55:55.687513 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-03-05 00:55:55.687518 | orchestrator | Thursday 05 March 2026 00:50:25 +0000 (0:00:03.477) 0:02:09.078 ******** 2026-03-05 00:55:55.687523 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:55.687528 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:55.687533 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:55.687538 | orchestrator | 2026-03-05 00:55:55.687543 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-03-05 00:55:55.687548 | orchestrator | Thursday 05 March 2026 00:50:26 +0000 (0:00:00.941) 0:02:10.020 ******** 2026-03-05 00:55:55.687553 | orchestrator | skipping: [testbed-node-0] 2026-03-05 
00:55:55.687558 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:55.687563 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:55.687568 | orchestrator | 2026-03-05 00:55:55.687573 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-03-05 00:55:55.687579 | orchestrator | Thursday 05 March 2026 00:50:27 +0000 (0:00:01.280) 0:02:11.300 ******** 2026-03-05 00:55:55.687584 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:55:55.687589 | orchestrator | 2026-03-05 00:55:55.687594 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-03-05 00:55:55.687599 | orchestrator | Thursday 05 March 2026 00:50:28 +0000 (0:00:00.714) 0:02:12.014 ******** 2026-03-05 00:55:55.687605 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-05 00:55:55.687611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 
'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.687619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.687633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.687639 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 
'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-05 00:55:55.687644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.687650 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.687658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.687670 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-05 00:55:55.687676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 
'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.687681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.687686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.687692 | orchestrator | 2026-03-05 00:55:55.687697 | orchestrator | TASK 
[haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-03-05 00:55:55.687702 | orchestrator | Thursday 05 March 2026 00:50:33 +0000 (0:00:05.892) 0:02:17.907 ******** 2026-03-05 00:55:55.687707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-05 00:55:55.687719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.687728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 
'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.687733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.687738 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:55.687744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 
'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-05 00:55:55.687749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.687757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.687766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.687772 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:55.687780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-05 00:55:55.687786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.687791 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.687796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.687805 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:55.687810 | orchestrator | 2026-03-05 00:55:55.687815 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-03-05 00:55:55.687821 | orchestrator | Thursday 05 March 2026 00:50:35 +0000 (0:00:02.054) 0:02:19.962 ******** 2026-03-05 00:55:55.687826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 
'tls_backend': 'no'}})  2026-03-05 00:55:55.687834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-05 00:55:55.687839 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:55.687845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-05 00:55:55.687850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-05 00:55:55.687855 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:55.687862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-05 00:55:55.687874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-05 00:55:55.687883 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:55.687890 | orchestrator | 2026-03-05 00:55:55.687904 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-03-05 00:55:55.687914 | orchestrator | Thursday 05 March 2026 00:50:37 +0000 (0:00:01.384) 0:02:21.347 ******** 2026-03-05 00:55:55.687921 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:55:55.687928 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:55:55.687936 | orchestrator | 
changed: [testbed-node-1] 2026-03-05 00:55:55.687943 | orchestrator | 2026-03-05 00:55:55.687951 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-03-05 00:55:55.687959 | orchestrator | Thursday 05 March 2026 00:50:39 +0000 (0:00:02.022) 0:02:23.369 ******** 2026-03-05 00:55:55.687967 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:55:55.687974 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:55:55.687982 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:55:55.687990 | orchestrator | 2026-03-05 00:55:55.687999 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-03-05 00:55:55.688025 | orchestrator | Thursday 05 March 2026 00:50:41 +0000 (0:00:02.065) 0:02:25.434 ******** 2026-03-05 00:55:55.688034 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:55.688043 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:55.688051 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:55.688059 | orchestrator | 2026-03-05 00:55:55.688067 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-03-05 00:55:55.688076 | orchestrator | Thursday 05 March 2026 00:50:42 +0000 (0:00:00.595) 0:02:26.030 ******** 2026-03-05 00:55:55.688084 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:55.688093 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:55.688101 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:55.688109 | orchestrator | 2026-03-05 00:55:55.688118 | orchestrator | TASK [include_role : designate] ************************************************ 2026-03-05 00:55:55.688137 | orchestrator | Thursday 05 March 2026 00:50:42 +0000 (0:00:00.396) 0:02:26.427 ******** 2026-03-05 00:55:55.688146 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:55:55.688154 | orchestrator | 2026-03-05 00:55:55.688163 | 
orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-03-05 00:55:55.688171 | orchestrator | Thursday 05 March 2026 00:50:43 +0000 (0:00:00.976) 0:02:27.403 ******** 2026-03-05 00:55:55.688180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-05 00:55:55.688194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-05 00:55:55.688204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.688226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.688643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.688669 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-05 00:55:55.688690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.688699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.688714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': 
{'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-05 00:55:55.688724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.688732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.688783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.688801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.688810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.688825 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-05 00:55:55.688833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-05 00:55:55.688841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.688908 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.688929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.688939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.688948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 
'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.688957 | orchestrator | 2026-03-05 00:55:55.688966 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-03-05 00:55:55.689078 | orchestrator | Thursday 05 March 2026 00:50:48 +0000 (0:00:04.758) 0:02:32.161 ******** 2026-03-05 00:55:55.689095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-05 00:55:55.689105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-05 00:55:55.689484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.689543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.689554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.689564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.689580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.689590 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:55.689600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-05 00:55:55.689696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-05 00:55:55.689716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.689725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.689734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.689776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.689786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': 
['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.689795 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:55.689803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-05 00:55:55.689924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 
 2026-03-05 00:55:55.689939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.690263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.690297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.690350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': 
{'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.690362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.690394 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:55.690405 | orchestrator | 2026-03-05 00:55:55.690416 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-03-05 00:55:55.690426 | orchestrator | Thursday 05 March 2026 00:50:49 +0000 (0:00:01.054) 0:02:33.216 ******** 2026-03-05 00:55:55.690436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-05 00:55:55.690447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-05 00:55:55.690457 | orchestrator | 
skipping: [testbed-node-0] 2026-03-05 00:55:55.690483 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-05 00:55:55.690493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-05 00:55:55.690502 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:55.690511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-05 00:55:55.690536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-05 00:55:55.690554 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:55.690564 | orchestrator | 2026-03-05 00:55:55.690573 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-03-05 00:55:55.690582 | orchestrator | Thursday 05 March 2026 00:50:50 +0000 (0:00:01.154) 0:02:34.370 ******** 2026-03-05 00:55:55.690591 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:55:55.690601 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:55:55.690610 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:55:55.690619 | orchestrator | 2026-03-05 00:55:55.690628 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-03-05 00:55:55.690637 | orchestrator | Thursday 05 March 2026 00:50:52 +0000 (0:00:02.021) 0:02:36.392 ******** 2026-03-05 00:55:55.690646 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:55:55.690655 | 
orchestrator | changed: [testbed-node-2] 2026-03-05 00:55:55.690664 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:55:55.690673 | orchestrator | 2026-03-05 00:55:55.690692 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-03-05 00:55:55.690711 | orchestrator | Thursday 05 March 2026 00:50:54 +0000 (0:00:01.985) 0:02:38.377 ******** 2026-03-05 00:55:55.690720 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:55.690729 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:55.690738 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:55.690746 | orchestrator | 2026-03-05 00:55:55.690755 | orchestrator | TASK [include_role : glance] *************************************************** 2026-03-05 00:55:55.690765 | orchestrator | Thursday 05 March 2026 00:50:54 +0000 (0:00:00.621) 0:02:38.998 ******** 2026-03-05 00:55:55.690774 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:55:55.690784 | orchestrator | 2026-03-05 00:55:55.690793 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-03-05 00:55:55.690802 | orchestrator | Thursday 05 March 2026 00:50:56 +0000 (0:00:01.203) 0:02:40.202 ******** 2026-03-05 00:55:55.690830 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-05 00:55:55.690854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-05 00:55:55.690872 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 
check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-05 00:55:55.690897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-05 00:55:55.690913 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout 
client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-05 00:55:55.690935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-05 00:55:55.690945 | orchestrator | 2026-03-05 00:55:55.690955 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-03-05 00:55:55.690964 | orchestrator | Thursday 05 March 2026 00:51:01 +0000 (0:00:05.143) 0:02:45.345 ******** 2026-03-05 00:55:55.690979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout 
server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-05 00:55:55.691000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify 
required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-05 00:55:55.691073 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:55.691085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-05 
00:55:55.691107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-05 00:55:55.691118 | orchestrator | skipping: [testbed-node-1] 2026-03-05 
00:55:55.691135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-05 00:55:55.691152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': 
['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-05 00:55:55.691171 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:55.691180 | orchestrator | 2026-03-05 00:55:55.691189 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-03-05 00:55:55.691198 | orchestrator | Thursday 05 March 2026 00:51:05 +0000 (0:00:04.248) 0:02:49.594 ******** 2026-03-05 
00:55:55.691208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-05 00:55:55.691223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-05 00:55:55.691233 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:55.691243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-05 00:55:55.691253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout 
server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-05 00:55:55.691277 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:55.691288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-05 00:55:55.691303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-05 00:55:55.691313 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:55.691323 | orchestrator | 2026-03-05 00:55:55.691332 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-03-05 00:55:55.691341 | orchestrator | Thursday 05 March 2026 00:51:09 +0000 (0:00:04.120) 0:02:53.715 ******** 2026-03-05 00:55:55.691351 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:55:55.691373 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:55:55.691383 | orchestrator | changed: 
[testbed-node-2] 2026-03-05 00:55:55.691400 | orchestrator | 2026-03-05 00:55:55.691410 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-03-05 00:55:55.691418 | orchestrator | Thursday 05 March 2026 00:51:10 +0000 (0:00:01.267) 0:02:54.982 ******** 2026-03-05 00:55:55.691428 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:55:55.691436 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:55:55.691445 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:55:55.691454 | orchestrator | 2026-03-05 00:55:55.691463 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-03-05 00:55:55.691472 | orchestrator | Thursday 05 March 2026 00:51:13 +0000 (0:00:02.064) 0:02:57.047 ******** 2026-03-05 00:55:55.691481 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:55.691490 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:55.691498 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:55.691507 | orchestrator | 2026-03-05 00:55:55.691516 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-03-05 00:55:55.691526 | orchestrator | Thursday 05 March 2026 00:51:13 +0000 (0:00:00.590) 0:02:57.638 ******** 2026-03-05 00:55:55.691535 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:55:55.691544 | orchestrator | 2026-03-05 00:55:55.691553 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-03-05 00:55:55.691562 | orchestrator | Thursday 05 March 2026 00:51:14 +0000 (0:00:00.935) 0:02:58.573 ******** 2026-03-05 00:55:55.691579 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-05 00:55:55.691590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-05 00:55:55.691607 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-05 00:55:55.691617 | orchestrator | 2026-03-05 00:55:55.691626 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 
2026-03-05 00:55:55.691635 | orchestrator | Thursday 05 March 2026 00:51:18 +0000 (0:00:04.176) 0:03:02.750 ******** 2026-03-05 00:55:55.691649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-05 00:55:55.691659 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:55.691668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-05 00:55:55.691677 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:55.691687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-05 00:55:55.691696 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:55.691705 | orchestrator | 2026-03-05 00:55:55.691719 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-03-05 00:55:55.691729 | orchestrator | Thursday 05 March 2026 00:51:19 +0000 (0:00:00.712) 0:03:03.463 ******** 2026-03-05 00:55:55.691745 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-05 00:55:55.691754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-05 00:55:55.691764 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:55.691772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-05 00:55:55.691781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-05 00:55:55.691791 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:55.691799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 
'listen_port': '3000'}})  2026-03-05 00:55:55.691808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-05 00:55:55.691817 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:55.691827 | orchestrator | 2026-03-05 00:55:55.691835 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-03-05 00:55:55.691845 | orchestrator | Thursday 05 March 2026 00:51:20 +0000 (0:00:00.697) 0:03:04.160 ******** 2026-03-05 00:55:55.691853 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:55:55.691862 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:55:55.691871 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:55:55.691879 | orchestrator | 2026-03-05 00:55:55.691888 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-03-05 00:55:55.691897 | orchestrator | Thursday 05 March 2026 00:51:21 +0000 (0:00:01.380) 0:03:05.541 ******** 2026-03-05 00:55:55.691906 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:55:55.691915 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:55:55.691924 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:55:55.691933 | orchestrator | 2026-03-05 00:55:55.691942 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-03-05 00:55:55.691950 | orchestrator | Thursday 05 March 2026 00:51:23 +0000 (0:00:02.286) 0:03:07.828 ******** 2026-03-05 00:55:55.691959 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:55.691968 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:55.691977 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:55.691985 | orchestrator | 2026-03-05 00:55:55.691999 | orchestrator | TASK [include_role : horizon] 
************************************************** 2026-03-05 00:55:55.692029 | orchestrator | Thursday 05 March 2026 00:51:24 +0000 (0:00:00.647) 0:03:08.475 ******** 2026-03-05 00:55:55.692039 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:55:55.692047 | orchestrator | 2026-03-05 00:55:55.692056 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-03-05 00:55:55.692064 | orchestrator | Thursday 05 March 2026 00:51:25 +0000 (0:00:01.121) 0:03:09.597 ******** 2026-03-05 00:55:55.692081 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-05 00:55:55.692105 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-05 00:55:55.692123 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-05 00:55:55.692138 | orchestrator | 2026-03-05 00:55:55.692147 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-03-05 00:55:55.692156 | orchestrator | Thursday 05 March 2026 00:51:30 +0000 (0:00:04.571) 0:03:14.169 ******** 2026-03-05 00:55:55.692171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 
'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-05 00:55:55.692187 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:55.692203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 
'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-05 00:55:55.692214 | orchestrator | skipping: [testbed-node-1] 
2026-03-05 00:55:55.692228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-05 00:55:55.692244 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:55.692253 | orchestrator | 2026-03-05 00:55:55.692262 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-03-05 00:55:55.692271 | orchestrator | Thursday 05 March 2026 00:51:31 +0000 (0:00:01.207) 0:03:15.377 ******** 2026-03-05 00:55:55.692286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-05 00:55:55.692297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-05 00:55:55.692306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-05 00:55:55.692316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': 
['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-05 00:55:55.692325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-05 00:55:55.692334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-05 00:55:55.692346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-05 00:55:55.692355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-05 00:55:55.692364 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:55.692378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-05 00:55:55.692393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-05 00:55:55.692402 | 
orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:55.692412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-05 00:55:55.692421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-05 00:55:55.692430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-05 00:55:55.692443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-05 00:55:55.692453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-05 00:55:55.692462 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:55.692470 | orchestrator | 2026-03-05 00:55:55.692479 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-03-05 00:55:55.692488 | orchestrator | 
Thursday 05 March 2026 00:51:32 +0000 (0:00:01.223) 0:03:16.601 ******** 2026-03-05 00:55:55.692497 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:55:55.692507 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:55:55.692515 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:55:55.692524 | orchestrator | 2026-03-05 00:55:55.692533 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-03-05 00:55:55.692542 | orchestrator | Thursday 05 March 2026 00:51:33 +0000 (0:00:01.320) 0:03:17.921 ******** 2026-03-05 00:55:55.692550 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:55:55.692559 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:55:55.692568 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:55:55.692576 | orchestrator | 2026-03-05 00:55:55.692585 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-03-05 00:55:55.692594 | orchestrator | Thursday 05 March 2026 00:51:36 +0000 (0:00:02.218) 0:03:20.139 ******** 2026-03-05 00:55:55.692603 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:55.692612 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:55.692621 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:55.692630 | orchestrator | 2026-03-05 00:55:55.692639 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-03-05 00:55:55.692648 | orchestrator | Thursday 05 March 2026 00:51:36 +0000 (0:00:00.298) 0:03:20.438 ******** 2026-03-05 00:55:55.692657 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:55.692670 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:55.692684 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:55.692703 | orchestrator | 2026-03-05 00:55:55.692726 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-03-05 00:55:55.692740 | orchestrator | Thursday 
05 March 2026 00:51:36 +0000 (0:00:00.492) 0:03:20.930 ******** 2026-03-05 00:55:55.692774 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:55:55.692790 | orchestrator | 2026-03-05 00:55:55.692805 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-03-05 00:55:55.692820 | orchestrator | Thursday 05 March 2026 00:51:38 +0000 (0:00:01.103) 0:03:22.034 ******** 2026-03-05 00:55:55.692845 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-05 00:55:55.692864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-05 00:55:55.692882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-05 00:55:55.692927 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-05 00:55:55.692946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 
'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-05 00:55:55.692970 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-05 00:55:55.692981 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-05 00:55:55.692991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-05 00:55:55.693031 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-05 00:55:55.693045 | orchestrator | 2026-03-05 00:55:55.693055 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-03-05 00:55:55.693064 | orchestrator | Thursday 05 March 2026 00:51:42 +0000 (0:00:04.552) 0:03:26.586 ******** 2026-03-05 00:55:55.693076 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-05 00:55:55.693108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-05 00:55:55.693139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-05 00:55:55.693154 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:55.693170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-05 00:55:55.693206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-05 00:55:55.693222 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-05 00:55:55.693243 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:55.693257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-05 00:55:55.693276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-05 00:55:55.693294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-05 00:55:55.693308 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:55.693326 | orchestrator | 2026-03-05 00:55:55.693344 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-03-05 00:55:55.693360 | orchestrator | Thursday 05 March 2026 00:51:43 +0000 (0:00:00.786) 0:03:27.372 ******** 2026-03-05 00:55:55.693373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-05 00:55:55.693384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-05 
00:55:55.693394 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:55.693419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-05 00:55:55.693429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-05 00:55:55.693438 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:55.693454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-05 00:55:55.693464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-05 00:55:55.693472 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:55.693482 | orchestrator | 2026-03-05 00:55:55.693490 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-03-05 00:55:55.693499 | orchestrator | Thursday 05 March 2026 00:51:44 +0000 (0:00:01.244) 0:03:28.616 ******** 2026-03-05 00:55:55.693508 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:55:55.693517 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:55:55.693525 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:55:55.693534 | orchestrator | 2026-03-05 00:55:55.693542 | orchestrator | TASK 
[proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-03-05 00:55:55.693552 | orchestrator | Thursday 05 March 2026 00:51:46 +0000 (0:00:01.512) 0:03:30.129 ******** 2026-03-05 00:55:55.693560 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:55:55.693569 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:55:55.693578 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:55:55.693586 | orchestrator | 2026-03-05 00:55:55.693595 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-03-05 00:55:55.693603 | orchestrator | Thursday 05 March 2026 00:51:48 +0000 (0:00:02.234) 0:03:32.364 ******** 2026-03-05 00:55:55.693612 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:55.693621 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:55.693629 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:55.693638 | orchestrator | 2026-03-05 00:55:55.693647 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-03-05 00:55:55.693655 | orchestrator | Thursday 05 March 2026 00:51:49 +0000 (0:00:00.679) 0:03:33.043 ******** 2026-03-05 00:55:55.693664 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:55:55.693673 | orchestrator | 2026-03-05 00:55:55.693681 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-03-05 00:55:55.693690 | orchestrator | Thursday 05 March 2026 00:51:50 +0000 (0:00:01.098) 0:03:34.142 ******** 2026-03-05 00:55:55.693705 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-05 00:55:55.693715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.693746 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-05 00:55:55.693757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.693767 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-05 00:55:55.693780 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.693789 | orchestrator | 2026-03-05 00:55:55.693798 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-03-05 00:55:55.693807 | orchestrator | Thursday 05 March 2026 00:51:54 +0000 (0:00:04.590) 0:03:38.733 ******** 2026-03-05 00:55:55.693816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-05 00:55:55.693846 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.693856 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:55.693866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-05 00:55:55.693879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 
'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.693889 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:55.693898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-05 00:55:55.693912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.693921 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:55.693930 | orchestrator | 2026-03-05 00:55:55.693953 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-03-05 00:55:55.693970 | orchestrator | Thursday 05 March 2026 00:51:55 +0000 (0:00:01.264) 0:03:39.997 ******** 2026-03-05 00:55:55.693993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-05 00:55:55.694108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-05 00:55:55.694126 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:55.694142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-05 00:55:55.694158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-05 00:55:55.694174 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:55.694191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-05 00:55:55.694207 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-05 00:55:55.694224 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:55.694235 | orchestrator | 2026-03-05 00:55:55.694244 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-03-05 00:55:55.694265 | orchestrator | Thursday 05 March 2026 00:51:57 +0000 (0:00:01.080) 0:03:41.078 ******** 2026-03-05 00:55:55.694274 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:55:55.694283 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:55:55.694292 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:55:55.694301 | orchestrator | 2026-03-05 00:55:55.694311 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-03-05 00:55:55.694327 | orchestrator | Thursday 05 March 2026 00:51:58 +0000 (0:00:01.335) 0:03:42.414 ******** 2026-03-05 00:55:55.694351 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:55:55.694367 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:55:55.694382 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:55:55.694398 | orchestrator | 2026-03-05 00:55:55.694413 | orchestrator | TASK [include_role : manila] *************************************************** 2026-03-05 00:55:55.694427 | orchestrator | Thursday 05 March 2026 00:52:00 +0000 (0:00:02.406) 0:03:44.820 ******** 2026-03-05 00:55:55.694442 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:55:55.694456 | orchestrator | 2026-03-05 00:55:55.694496 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-03-05 00:55:55.694512 | orchestrator | Thursday 05 March 2026 00:52:02 +0000 (0:00:01.415) 0:03:46.235 ******** 2026-03-05 00:55:55.694529 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-05 00:55:55.694545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.694586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-05 00:55:55.694603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.694619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.694641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.694667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.694681 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-05 00:55:55.694708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 
'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.694718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.694728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.694737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 
'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.694752 | orchestrator | 2026-03-05 00:55:55.694762 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-03-05 00:55:55.694771 | orchestrator | Thursday 05 March 2026 00:52:06 +0000 (0:00:04.175) 0:03:50.411 ******** 2026-03-05 00:55:55.694780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-05 00:55:55.694791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.694814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.694825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.694833 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:55.694865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-05 00:55:55.694885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.694895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.694905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 
'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.694914 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:55.694938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-05 00:55:55.694950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.694959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.694978 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.694987 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:55.694996 | orchestrator | 2026-03-05 00:55:55.695042 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-03-05 00:55:55.695072 | orchestrator | Thursday 05 March 2026 00:52:07 +0000 (0:00:00.886) 0:03:51.298 ******** 2026-03-05 00:55:55.695088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-05 00:55:55.695103 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-05 00:55:55.695119 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:55.695129 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-05 00:55:55.695137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-05 00:55:55.695146 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:55.695155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-05 00:55:55.695166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-05 00:55:55.695181 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:55.695207 | orchestrator | 2026-03-05 00:55:55.695221 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-03-05 00:55:55.695236 | orchestrator | Thursday 05 March 2026 00:52:08 +0000 (0:00:01.686) 0:03:52.985 ******** 2026-03-05 00:55:55.695272 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:55:55.695288 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:55:55.695303 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:55:55.695317 | orchestrator | 2026-03-05 00:55:55.695333 | orchestrator | TASK [proxysql-config : Copying over manila 
ProxySQL rules config] ************* 2026-03-05 00:55:55.695349 | orchestrator | Thursday 05 March 2026 00:52:10 +0000 (0:00:01.404) 0:03:54.390 ******** 2026-03-05 00:55:55.695365 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:55:55.695381 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:55:55.695397 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:55:55.695415 | orchestrator | 2026-03-05 00:55:55.695431 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-03-05 00:55:55.695448 | orchestrator | Thursday 05 March 2026 00:52:12 +0000 (0:00:02.156) 0:03:56.546 ******** 2026-03-05 00:55:55.695465 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:55:55.695494 | orchestrator | 2026-03-05 00:55:55.695504 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-03-05 00:55:55.695513 | orchestrator | Thursday 05 March 2026 00:52:13 +0000 (0:00:01.425) 0:03:57.971 ******** 2026-03-05 00:55:55.695523 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-05 00:55:55.695531 | orchestrator | 2026-03-05 00:55:55.695541 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-03-05 00:55:55.695549 | orchestrator | Thursday 05 March 2026 00:52:17 +0000 (0:00:03.448) 0:04:01.420 ******** 2026-03-05 00:55:55.695568 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-05 00:55:55.695581 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-05 00:55:55.695590 | orchestrator | skipping: [testbed-node-0] 
2026-03-05 00:55:55.695619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-05 00:55:55.695637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 
'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-05 00:55:55.695646 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:55.695660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': 
['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-05 00:55:55.695792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-05 00:55:55.695815 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:55.695824 | orchestrator | 2026-03-05 00:55:55.695833 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-03-05 00:55:55.695842 | orchestrator | Thursday 05 March 2026 00:52:20 +0000 (0:00:02.873) 0:04:04.294 ******** 2026-03-05 00:55:55.695851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-05 00:55:55.695871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-05 00:55:55.695880 | orchestrator | skipping: [testbed-node-0] 
2026-03-05 00:55:55.695906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-05 00:55:55.695925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 
'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-05 00:55:55.695934 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:55.695948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': 
['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-05 00:55:55.695959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-05 00:55:55.695967 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:55.695976 | orchestrator | 2026-03-05 00:55:55.695985 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-03-05 00:55:55.695994 | orchestrator | Thursday 05 March 2026 00:52:22 +0000 (0:00:02.636) 0:04:06.931 ******** 2026-03-05 00:55:55.696057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', '']}})  2026-03-05 00:55:55.696071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-05 00:55:55.696080 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:55.696089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-05 00:55:55.696099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-05 
00:55:55.696113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-05 00:55:55.696122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-05 00:55:55.696131 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:55.696140 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:55.696149 | orchestrator | 2026-03-05 00:55:55.696158 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-03-05 00:55:55.696173 | orchestrator | Thursday 05 March 2026 00:52:26 +0000 (0:00:03.416) 0:04:10.347 ******** 2026-03-05 00:55:55.696182 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:55:55.696191 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:55:55.696200 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:55:55.696208 | orchestrator | 2026-03-05 00:55:55.696217 | orchestrator | TASK 
[proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-03-05 00:55:55.696226 | orchestrator | Thursday 05 March 2026 00:52:28 +0000 (0:00:02.045) 0:04:12.393 ******** 2026-03-05 00:55:55.696234 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:55.696243 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:55.696251 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:55.696260 | orchestrator | 2026-03-05 00:55:55.696268 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-03-05 00:55:55.696277 | orchestrator | Thursday 05 March 2026 00:52:30 +0000 (0:00:01.702) 0:04:14.095 ******** 2026-03-05 00:55:55.696291 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:55.696300 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:55.696309 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:55.696318 | orchestrator | 2026-03-05 00:55:55.696326 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-03-05 00:55:55.696335 | orchestrator | Thursday 05 March 2026 00:52:30 +0000 (0:00:00.336) 0:04:14.431 ******** 2026-03-05 00:55:55.696344 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:55:55.696352 | orchestrator | 2026-03-05 00:55:55.696361 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-03-05 00:55:55.696370 | orchestrator | Thursday 05 March 2026 00:52:31 +0000 (0:00:01.500) 0:04:15.932 ******** 2026-03-05 00:55:55.696379 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-05 00:55:55.696389 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-05 00:55:55.696404 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-05 00:55:55.696420 | orchestrator | 2026-03-05 
00:55:55.696429 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-03-05 00:55:55.696437 | orchestrator | Thursday 05 March 2026 00:52:33 +0000 (0:00:01.509) 0:04:17.442 ******** 2026-03-05 00:55:55.696447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-05 00:55:55.696471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-05 00:55:55.696482 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:55.696490 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:55.696500 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-05 00:55:55.696509 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:55.696524 | orchestrator | 2026-03-05 00:55:55.696540 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-03-05 00:55:55.696555 | orchestrator | Thursday 05 March 2026 00:52:33 +0000 (0:00:00.491) 0:04:17.933 ******** 2026-03-05 00:55:55.696571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-05 00:55:55.696588 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:55.696603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-05 00:55:55.696619 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:55.696636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': 
{'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-05 00:55:55.696670 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:55.696688 | orchestrator | 2026-03-05 00:55:55.696705 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-03-05 00:55:55.696717 | orchestrator | Thursday 05 March 2026 00:52:35 +0000 (0:00:01.113) 0:04:19.047 ******** 2026-03-05 00:55:55.696726 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:55.696735 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:55.696743 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:55.696752 | orchestrator | 2026-03-05 00:55:55.696760 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-03-05 00:55:55.696769 | orchestrator | Thursday 05 March 2026 00:52:35 +0000 (0:00:00.511) 0:04:19.558 ******** 2026-03-05 00:55:55.696778 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:55.696787 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:55.696795 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:55.696803 | orchestrator | 2026-03-05 00:55:55.696812 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-03-05 00:55:55.696821 | orchestrator | Thursday 05 March 2026 00:52:37 +0000 (0:00:01.564) 0:04:21.123 ******** 2026-03-05 00:55:55.696829 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:55.696838 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:55.696846 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:55.696855 | orchestrator | 2026-03-05 00:55:55.696864 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-03-05 00:55:55.696872 | orchestrator | 
Thursday 05 March 2026 00:52:37 +0000 (0:00:00.317) 0:04:21.440 ******** 2026-03-05 00:55:55.696881 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:55:55.696889 | orchestrator | 2026-03-05 00:55:55.696898 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-03-05 00:55:55.696907 | orchestrator | Thursday 05 March 2026 00:52:39 +0000 (0:00:01.751) 0:04:23.192 ******** 2026-03-05 00:55:55.696935 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-05 00:55:55.696946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.696956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-05 00:55:55.696977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.696987 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.697059 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.697074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.697084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-05 00:55:55.697105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.697115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': 
True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.697124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-05 00:55:55.697149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-05 00:55:55.697161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.697171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-05 00:55:55.697186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.697204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 
'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-05 00:55:55.697213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.697223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-05 00:55:55.697244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 
'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-05 00:55:55.697254 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.697271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}}) 
 2026-03-05 00:55:55.697285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-05 00:55:55.697295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-05 00:55:55.697304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-05 00:55:55.697326 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.697336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-05 00:55:55.697351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.697360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 
'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-05 00:55:55.697374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-05 00:55:55.697383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.697405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-05 00:55:55.697416 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-05 00:55:55.697435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': 
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-05 00:55:55.697534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.697551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.697564 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.697600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-05 00:55:55.697627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.697640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-05 00:55:55.697656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-05 00:55:55.697676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.697695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 
'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-05 00:55:55.697733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.697748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-05 00:55:55.697771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-05 00:55:55.697786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.697807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': 
False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-05 00:55:55.697823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-05 00:55:55.697844 | orchestrator | 2026-03-05 00:55:55.697858 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-03-05 00:55:55.697873 | orchestrator | Thursday 05 March 2026 00:52:44 +0000 (0:00:05.317) 0:04:28.510 ******** 2026-03-05 00:55:55.697908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9696', 'listen_port': '9696'}}}})  2026-03-05 00:55:55.697934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.697949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.697971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.697987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-05 00:55:55.698091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.698144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-05 00:55:55.698162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-05 00:55:55.698176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.698192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-05 00:55:55.698214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.698230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-05 00:55:55.698246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 
'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-05 00:55:55.698293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.698309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-05 00:55:55.698329 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-05 00:55:55.698344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-05 00:55:55.698357 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:55.698372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.698418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.698433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-05 
00:55:55.698448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-05 00:55:55.698468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.698483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-05 00:55:55.698507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-05 00:55:55.698540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.698555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-05 00:55:55.698570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.698591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.698606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 
'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.698629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-05 00:55:55.698661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-05 00:55:55.698676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.698691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.698712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-05 00:55:55.698728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-05 00:55:55.698742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-05 00:55:55.698766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-05 00:55:55.698799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.698815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.698830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-05 00:55:55.698854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-05 00:55:55.698880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.698913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-05 00:55:55.698930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-05 00:55:55.698945 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:55.698959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-05 00:55:55.698973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-05 
00:55:55.698994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-05 00:55:55.699048 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-05 00:55:55.699064 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:55.699078 | orchestrator | 2026-03-05 00:55:55.699093 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-03-05 00:55:55.699109 | orchestrator | Thursday 05 March 2026 00:52:46 +0000 (0:00:02.169) 
0:04:30.680 ******** 2026-03-05 00:55:55.699123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-05 00:55:55.699133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-05 00:55:55.699141 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:55.699166 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-05 00:55:55.699175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-05 00:55:55.699183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-05 00:55:55.699191 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:55.699199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-05 00:55:55.699207 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:55.699215 | orchestrator | 2026-03-05 00:55:55.699222 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-03-05 00:55:55.699231 | orchestrator | Thursday 05 March 2026 00:52:49 +0000 (0:00:02.341) 0:04:33.021 ******** 2026-03-05 00:55:55.699247 | orchestrator | changed: [testbed-node-0] 
2026-03-05 00:55:55.699262 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:55:55.699272 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:55:55.699280 | orchestrator | 2026-03-05 00:55:55.699288 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-03-05 00:55:55.699296 | orchestrator | Thursday 05 March 2026 00:52:50 +0000 (0:00:01.197) 0:04:34.219 ******** 2026-03-05 00:55:55.699304 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:55:55.699312 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:55:55.699319 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:55:55.699327 | orchestrator | 2026-03-05 00:55:55.699335 | orchestrator | TASK [include_role : placement] ************************************************ 2026-03-05 00:55:55.699343 | orchestrator | Thursday 05 March 2026 00:52:52 +0000 (0:00:02.201) 0:04:36.420 ******** 2026-03-05 00:55:55.699351 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:55:55.699359 | orchestrator | 2026-03-05 00:55:55.699366 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-03-05 00:55:55.699381 | orchestrator | Thursday 05 March 2026 00:52:53 +0000 (0:00:01.450) 0:04:37.871 ******** 2026-03-05 00:55:55.699396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-05 00:55:55.699412 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-05 00:55:55.699446 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-05 00:55:55.699461 | orchestrator | 2026-03-05 00:55:55.699473 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-03-05 00:55:55.699486 | orchestrator | Thursday 05 March 2026 00:52:58 +0000 (0:00:04.453) 0:04:42.324 ******** 2026-03-05 00:55:55.699499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-05 00:55:55.699527 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:55.699541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-05 00:55:55.699553 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:55.699566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-05 00:55:55.699580 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:55.699593 | orchestrator | 2026-03-05 00:55:55.699606 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-03-05 00:55:55.699620 | orchestrator | Thursday 05 March 2026 00:52:58 +0000 (0:00:00.645) 0:04:42.970 ******** 2026-03-05 00:55:55.699632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-05 00:55:55.699642 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-05 00:55:55.699669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-05 00:55:55.699678 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:55.699765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-05 00:55:55.699793 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:55.699801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-05 00:55:55.699809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-05 00:55:55.699817 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:55.699825 | orchestrator | 2026-03-05 00:55:55.699841 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-03-05 00:55:55.699850 | orchestrator | Thursday 05 March 2026 00:52:59 +0000 (0:00:00.952) 0:04:43.923 ******** 2026-03-05 00:55:55.699858 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:55:55.699866 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:55:55.699873 | orchestrator | changed: [testbed-node-2] 2026-03-05 
00:55:55.699881 | orchestrator | 2026-03-05 00:55:55.699889 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-03-05 00:55:55.699897 | orchestrator | Thursday 05 March 2026 00:53:02 +0000 (0:00:02.225) 0:04:46.149 ******** 2026-03-05 00:55:55.699905 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:55:55.699912 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:55:55.699920 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:55:55.699928 | orchestrator | 2026-03-05 00:55:55.699936 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-03-05 00:55:55.699944 | orchestrator | Thursday 05 March 2026 00:53:04 +0000 (0:00:02.045) 0:04:48.194 ******** 2026-03-05 00:55:55.699952 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:55:55.699960 | orchestrator | 2026-03-05 00:55:55.699968 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-03-05 00:55:55.699976 | orchestrator | Thursday 05 March 2026 00:53:05 +0000 (0:00:01.759) 0:04:49.954 ******** 2026-03-05 00:55:55.699989 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-05 00:55:55.700000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.700093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.700104 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-05 00:55:55.700121 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.700134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.700144 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-05 00:55:55.700167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.700182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.700191 | orchestrator | 2026-03-05 00:55:55.700199 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-03-05 00:55:55.700208 | orchestrator | Thursday 05 March 2026 00:53:12 +0000 (0:00:06.174) 0:04:56.129 ******** 2026-03-05 00:55:55.700221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-05 00:55:55.700230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.700238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.700246 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:55.700269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-05 00:55:55.700291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.700302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.700315 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:55.700328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-05 00:55:55.700337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.700360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.700375 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:55.700383 | orchestrator | 2026-03-05 00:55:55.700391 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-03-05 00:55:55.700399 | orchestrator | Thursday 05 March 2026 00:53:13 +0000 (0:00:01.524) 0:04:57.653 ******** 2026-03-05 00:55:55.700407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-05 00:55:55.700417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-05 00:55:55.700425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-05 
00:55:55.700433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-05 00:55:55.700441 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:55.700449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-05 00:55:55.700458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-05 00:55:55.700466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-05 00:55:55.700475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-05 00:55:55.700487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-05 00:55:55.700495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-05 00:55:55.700504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-05 00:55:55.700512 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:55.700520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-05 00:55:55.700528 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:55.700536 | orchestrator | 2026-03-05 00:55:55.700544 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-03-05 00:55:55.700557 | orchestrator | Thursday 05 March 2026 00:53:14 +0000 (0:00:01.220) 0:04:58.873 ******** 2026-03-05 00:55:55.700566 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:55:55.700573 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:55:55.700581 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:55:55.700589 | orchestrator | 2026-03-05 00:55:55.700595 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-03-05 00:55:55.700602 | orchestrator | Thursday 05 March 2026 00:53:16 +0000 (0:00:01.672) 0:05:00.546 ******** 2026-03-05 00:55:55.700609 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:55:55.700617 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:55:55.700629 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:55:55.700639 | orchestrator | 2026-03-05 00:55:55.700650 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-03-05 00:55:55.700661 | orchestrator | Thursday 05 March 2026 00:53:18 +0000 (0:00:02.458) 0:05:03.004 ******** 2026-03-05 00:55:55.700672 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:55:55.700682 | orchestrator | 2026-03-05 00:55:55.700692 | orchestrator | TASK 
[nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-03-05 00:55:55.700717 | orchestrator | Thursday 05 March 2026 00:53:20 +0000 (0:00:01.988) 0:05:04.993 ******** 2026-03-05 00:55:55.700730 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-03-05 00:55:55.700742 | orchestrator | 2026-03-05 00:55:55.700753 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-03-05 00:55:55.700764 | orchestrator | Thursday 05 March 2026 00:53:21 +0000 (0:00:01.013) 0:05:06.006 ******** 2026-03-05 00:55:55.700776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-05 00:55:55.700788 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-05 00:55:55.700799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-05 00:55:55.700810 | orchestrator | 2026-03-05 00:55:55.700821 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-03-05 00:55:55.700833 | orchestrator | Thursday 05 March 2026 00:53:27 +0000 (0:00:05.397) 0:05:11.404 ******** 2026-03-05 00:55:55.700849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-05 00:55:55.700869 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:55.700882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-05 00:55:55.700893 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:55.700904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 
'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-05 00:55:55.700917 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:55.700928 | orchestrator | 2026-03-05 00:55:55.700939 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-03-05 00:55:55.700950 | orchestrator | Thursday 05 March 2026 00:53:28 +0000 (0:00:01.486) 0:05:12.890 ******** 2026-03-05 00:55:55.700982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-05 00:55:55.700996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-05 00:55:55.701029 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:55.701041 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-05 00:55:55.701052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-05 00:55:55.701059 | orchestrator | skipping: 
[testbed-node-1] 2026-03-05 00:55:55.701065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-05 00:55:55.701073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-05 00:55:55.701079 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:55.701086 | orchestrator | 2026-03-05 00:55:55.701093 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-05 00:55:55.701100 | orchestrator | Thursday 05 March 2026 00:53:30 +0000 (0:00:01.985) 0:05:14.876 ******** 2026-03-05 00:55:55.701106 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:55:55.701113 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:55:55.701120 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:55:55.701127 | orchestrator | 2026-03-05 00:55:55.701133 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-05 00:55:55.701151 | orchestrator | Thursday 05 March 2026 00:53:33 +0000 (0:00:02.904) 0:05:17.780 ******** 2026-03-05 00:55:55.701157 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:55:55.701164 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:55:55.701170 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:55:55.701177 | orchestrator | 2026-03-05 00:55:55.701184 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-03-05 00:55:55.701191 | orchestrator | Thursday 05 March 2026 00:53:37 +0000 (0:00:03.662) 0:05:21.443 ******** 2026-03-05 00:55:55.701198 | orchestrator | included: 
/ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-03-05 00:55:55.701205 | orchestrator | 2026-03-05 00:55:55.701216 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-03-05 00:55:55.701223 | orchestrator | Thursday 05 March 2026 00:53:39 +0000 (0:00:01.814) 0:05:23.257 ******** 2026-03-05 00:55:55.701230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-05 00:55:55.701238 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:55.701245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-05 00:55:55.701252 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:55.701271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': 
'6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-05 00:55:55.701279 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:55.701285 | orchestrator | 2026-03-05 00:55:55.701292 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-03-05 00:55:55.701299 | orchestrator | Thursday 05 March 2026 00:53:41 +0000 (0:00:01.848) 0:05:25.106 ******** 2026-03-05 00:55:55.701306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-05 00:55:55.701313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-05 00:55:55.701325 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:55.701332 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:55.701338 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-05 00:55:55.701345 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:55.701352 | orchestrator | 2026-03-05 00:55:55.701359 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-03-05 00:55:55.701366 | orchestrator | Thursday 05 March 2026 00:53:43 +0000 (0:00:01.990) 0:05:27.097 ******** 2026-03-05 00:55:55.701372 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:55.701379 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:55.701386 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:55.701392 | orchestrator | 2026-03-05 00:55:55.701403 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-05 00:55:55.701465 | orchestrator | Thursday 05 March 2026 00:53:45 +0000 (0:00:02.352) 0:05:29.449 ******** 2026-03-05 00:55:55.701472 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:55:55.701480 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:55:55.701486 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:55:55.701493 | orchestrator | 2026-03-05 00:55:55.701500 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-05 00:55:55.701507 | orchestrator | Thursday 05 March 2026 00:53:47 +0000 (0:00:02.355) 0:05:31.805 ******** 2026-03-05 00:55:55.701513 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:55:55.701520 | orchestrator | ok: [testbed-node-1] 2026-03-05 
00:55:55.701527 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:55:55.701533 | orchestrator | 2026-03-05 00:55:55.701540 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-03-05 00:55:55.701547 | orchestrator | Thursday 05 March 2026 00:53:51 +0000 (0:00:03.240) 0:05:35.046 ******** 2026-03-05 00:55:55.701553 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-03-05 00:55:55.701561 | orchestrator | 2026-03-05 00:55:55.701567 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-03-05 00:55:55.701574 | orchestrator | Thursday 05 March 2026 00:53:51 +0000 (0:00:00.933) 0:05:35.979 ******** 2026-03-05 00:55:55.701581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-05 00:55:55.701588 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:55.701609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': 
['timeout tunnel 10m']}}}})  2026-03-05 00:55:55.701622 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:55.701629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-05 00:55:55.701636 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:55.701642 | orchestrator | 2026-03-05 00:55:55.701649 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-03-05 00:55:55.701656 | orchestrator | Thursday 05 March 2026 00:53:53 +0000 (0:00:01.420) 0:05:37.399 ******** 2026-03-05 00:55:55.701663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-05 00:55:55.701670 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:55.701676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 
'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-05 00:55:55.701683 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:55.701694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-05 00:55:55.701701 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:55.701708 | orchestrator | 2026-03-05 00:55:55.701715 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-03-05 00:55:55.701722 | orchestrator | Thursday 05 March 2026 00:53:55 +0000 (0:00:01.779) 0:05:39.178 ******** 2026-03-05 00:55:55.701728 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:55.701735 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:55.701741 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:55.701748 | orchestrator | 2026-03-05 00:55:55.701755 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-05 00:55:55.701761 | orchestrator | Thursday 05 March 2026 00:53:56 +0000 (0:00:01.349) 0:05:40.528 ******** 2026-03-05 00:55:55.701768 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:55:55.701777 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:55:55.701788 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:55:55.701804 | orchestrator 
| 2026-03-05 00:55:55.701818 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-05 00:55:55.701828 | orchestrator | Thursday 05 March 2026 00:53:59 +0000 (0:00:02.590) 0:05:43.119 ******** 2026-03-05 00:55:55.701853 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:55:55.701864 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:55:55.701876 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:55:55.701886 | orchestrator | 2026-03-05 00:55:55.701897 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-03-05 00:55:55.701908 | orchestrator | Thursday 05 March 2026 00:54:02 +0000 (0:00:03.589) 0:05:46.708 ******** 2026-03-05 00:55:55.701919 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:55:55.701930 | orchestrator | 2026-03-05 00:55:55.701942 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-03-05 00:55:55.701953 | orchestrator | Thursday 05 March 2026 00:54:04 +0000 (0:00:01.788) 0:05:48.496 ******** 2026-03-05 00:55:55.701983 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-05 00:55:55.701991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-05 00:55:55.701999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-05 00:55:55.702061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-05 00:55:55.702070 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-05 00:55:55.702084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.702104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-05 00:55:55.702112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-05 00:55:55.702119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-05 00:55:55.702126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-05 
00:55:55.702137 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-05 00:55:55.702149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-05 00:55:55.702169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-05 00:55:55.702177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-05 00:55:55.702184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.702191 | orchestrator | 2026-03-05 00:55:55.702197 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-03-05 00:55:55.702204 | orchestrator | Thursday 05 March 2026 00:54:08 +0000 (0:00:04.168) 0:05:52.664 ******** 2026-03-05 00:55:55.702215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-05 00:55:55.702232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-05 00:55:55.702239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-05 00:55:55.702259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-05 00:55:55.702267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.702274 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:55.702281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': 
'9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-05 00:55:55.702288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-05 00:55:55.702299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-05 00:55:55.702310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-05 00:55:55.702317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.702335 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:55.702342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-05 00:55:55.702349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 
'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-05 00:55:55.702356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-05 00:55:55.702369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-05 00:55:55.702381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-05 00:55:55.702388 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:55.702394 | orchestrator | 2026-03-05 00:55:55.702401 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-03-05 00:55:55.702408 | orchestrator | Thursday 05 March 2026 00:54:09 +0000 (0:00:00.909) 0:05:53.574 ******** 2026-03-05 00:55:55.702415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-05 00:55:55.702422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-05 00:55:55.702430 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:55.702448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-05 00:55:55.702455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-05 00:55:55.702462 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:55.702469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-05 00:55:55.702475 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-05 00:55:55.702482 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:55.702489 | orchestrator | 2026-03-05 00:55:55.702495 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-03-05 00:55:55.702502 | orchestrator | Thursday 05 March 2026 00:54:11 +0000 (0:00:01.882) 0:05:55.457 ******** 2026-03-05 00:55:55.702509 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:55:55.702515 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:55:55.702521 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:55:55.702528 | orchestrator | 2026-03-05 00:55:55.702535 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-03-05 00:55:55.702541 | orchestrator | Thursday 05 March 2026 00:54:12 +0000 (0:00:01.486) 0:05:56.943 ******** 2026-03-05 00:55:55.702548 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:55:55.702556 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:55:55.702568 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:55:55.702576 | orchestrator | 2026-03-05 00:55:55.702588 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-03-05 00:55:55.702595 | orchestrator | Thursday 05 March 2026 00:54:15 +0000 (0:00:02.325) 0:05:59.269 ******** 2026-03-05 00:55:55.702601 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:55:55.702608 | orchestrator | 2026-03-05 00:55:55.702614 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-03-05 00:55:55.702621 | orchestrator | Thursday 05 March 2026 00:54:17 +0000 (0:00:01.884) 0:06:01.153 ******** 2026-03-05 00:55:55.702632 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-05 00:55:55.702640 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-05 00:55:55.702659 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-05 00:55:55.702668 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-05 00:55:55.702681 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-05 00:55:55.702693 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-05 00:55:55.702700 | orchestrator | 2026-03-05 00:55:55.702707 | orchestrator | TASK [haproxy-config : 
Add configuration for opensearch when using single external frontend] *** 2026-03-05 00:55:55.702714 | orchestrator | Thursday 05 March 2026 00:54:23 +0000 (0:00:06.002) 0:06:07.155 ******** 2026-03-05 00:55:55.702732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-05 00:55:55.702740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-05 00:55:55.702752 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:55.702759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-05 00:55:55.702770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-05 00:55:55.702778 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:55.702795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-05 00:55:55.702803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-05 00:55:55.702815 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:55.702821 | orchestrator | 2026-03-05 00:55:55.702896 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-03-05 00:55:55.702908 | orchestrator | Thursday 05 March 2026 00:54:23 +0000 (0:00:00.757) 0:06:07.913 ******** 2026-03-05 00:55:55.702916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-05 00:55:55.702923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-05 00:55:55.702931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-05 00:55:55.702939 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:55.702945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-05 00:55:55.702957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-05 00:55:55.702964 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-05 00:55:55.702971 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:55.702978 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-05 00:55:55.702985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-05 00:55:55.702992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-05 00:55:55.702998 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:55.703022 | orchestrator | 2026-03-05 00:55:55.703029 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-03-05 00:55:55.703036 | orchestrator | Thursday 05 March 2026 00:54:24 +0000 (0:00:01.087) 0:06:09.000 ******** 2026-03-05 00:55:55.703043 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:55.703049 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:55.703056 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:55.703062 | orchestrator | 2026-03-05 00:55:55.703069 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-03-05 00:55:55.703076 | orchestrator | Thursday 05 March 2026 00:54:25 +0000 (0:00:00.931) 0:06:09.932 ******** 2026-03-05 00:55:55.703082 
| orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:55.703089 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:55.703102 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:55.703108 | orchestrator | 2026-03-05 00:55:55.703127 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-03-05 00:55:55.703135 | orchestrator | Thursday 05 March 2026 00:54:27 +0000 (0:00:01.542) 0:06:11.474 ******** 2026-03-05 00:55:55.703141 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:55:55.703148 | orchestrator | 2026-03-05 00:55:55.703155 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-03-05 00:55:55.703162 | orchestrator | Thursday 05 March 2026 00:54:29 +0000 (0:00:01.590) 0:06:13.065 ******** 2026-03-05 00:55:55.703169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-05 00:55:55.703177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 
'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-05 00:55:55.703188 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-05 00:55:55.703197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:55:55.703204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-05 00:55:55.703211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:55:55.703240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:55:55.703249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-05 00:55:55.703256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:55:55.703263 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-05 00:55:55.703274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-05 00:55:55.703281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 
'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-05 00:55:55.703288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:55:55.703312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:55:55.703320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-05 00:55:55.703327 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': 
{'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-05 00:55:55.703341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-05 00:55:55.703348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:55:55.703356 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-05 00:55:55.703371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:55:55.703379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-05 00:55:55.703386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-05 00:55:55.703396 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 
'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-05 00:55:55.703406 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:55:55.703431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-05 00:55:55.703439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 
'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:55:55.703446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:55:55.703453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-05 00:55:55.703460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:55:55.703471 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-05 00:55:55.703478 | orchestrator |
2026-03-05 00:55:55.703484 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] ***
2026-03-05 00:55:55.703491 | orchestrator | Thursday 05 March 2026 00:54:34 +0000 (0:00:05.378) 0:06:18.444 ********
2026-03-05 00:55:55.703498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-03-05 00:55:55.703510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-05 00:55:55.703522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:55:55.703529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:55:55.703536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-05 00:55:55.703547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image':
'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-03-05 00:55:55.703555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-03-05 00:55:55.703566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image':
'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:55:55.703577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:55:55.703585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-05 00:55:55.703592 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:55:55.703599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091',
'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-03-05 00:55:55.703606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-05 00:55:55.703616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:55:55.703628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:55:55.703665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor',
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-05 00:55:55.703678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-03-05 00:55:55.703686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {},
'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-03-05 00:55:55.703694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:55:55.703701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:55:55.703716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091',
'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-03-05 00:55:55.703724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-05 00:55:55.703731 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:55:55.703741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-05 00:55:55.703749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:55:55.703762 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value':
{'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:55:55.703774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-05 00:55:55.703797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2',
'active_passive': True}}}})
2026-03-05 00:55:55.703820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-03-05 00:55:55.703833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:55:55.703851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:55:55.703863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-05 00:55:55.703876 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:55:55.703889 | orchestrator |
2026-03-05 00:55:55.703897 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ********************
2026-03-05 00:55:55.703904 | orchestrator | Thursday 05 March 2026 00:54:35 +0000 (0:00:01.350) 0:06:19.794 ********
2026-03-05 00:55:55.703911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2026-03-05 00:55:55.703918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2026-03-05 00:55:55.703925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-03-05 00:55:55.703938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass':
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-03-05 00:55:55.703946 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:55:55.703953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2026-03-05 00:55:55.703966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2026-03-05 00:55:55.703974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-03-05 00:55:55.703981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2026-03-05 00:55:55.703987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-03-05 00:55:55.703994 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:55:55.704001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2026-03-05 00:55:55.704029 | orchestrator | skipping: [testbed-node-2] => (item={'key':
'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-03-05 00:55:55.704046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-03-05 00:55:55.704055 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:55:55.704066 | orchestrator |
2026-03-05 00:55:55.704077 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] *********
2026-03-05 00:55:55.704087 | orchestrator | Thursday 05 March 2026 00:54:36 +0000 (0:00:01.150) 0:06:20.944 ********
2026-03-05 00:55:55.704098 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:55:55.704108 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:55:55.704119 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:55:55.704130 | orchestrator |
2026-03-05 00:55:55.704141 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] *********
2026-03-05 00:55:55.704151 | orchestrator | Thursday 05 March 2026 00:54:37 +0000 (0:00:00.529) 0:06:21.474 ********
2026-03-05 00:55:55.704162 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:55:55.704173 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:55:55.704185 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:55:55.704196 | orchestrator |
2026-03-05 00:55:55.704207 | orchestrator | TASK [include_role : rabbitmq] *************************************************
2026-03-05 00:55:55.704216 | orchestrator | Thursday 05 March 2026 00:54:39 +0000 (0:00:01.672) 0:06:23.146 ********
2026-03-05 00:55:55.704223 | orchestrator | included:
rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 00:55:55.704238 | orchestrator |
2026-03-05 00:55:55.704244 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] *******************
2026-03-05 00:55:55.704251 | orchestrator | Thursday 05 March 2026 00:54:41 +0000 (0:00:01.902) 0:06:25.049 ********
2026-03-05 00:55:55.704262 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-05 00:55:55.704277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-05 00:55:55.704285 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-05 00:55:55.704292 | orchestrator |
2026-03-05 00:55:55.704304 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] ***
2026-03-05 00:55:55.704311 | orchestrator | Thursday 05 March 2026 00:54:43 +0000 (0:00:02.936) 0:06:27.986 ********
2026-03-05 00:55:55.704318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS',
'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-05 00:55:55.704395 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:55:55.704409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-05 00:55:55.704422 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:55:55.704434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None,
'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-05 00:55:55.704446 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:55:55.704458 | orchestrator |
2026-03-05 00:55:55.704467 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] **********************
2026-03-05 00:55:55.704474 | orchestrator | Thursday 05 March 2026 00:54:44 +0000 (0:00:00.908) 0:06:28.894 ********
2026-03-05 00:55:55.704481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-03-05 00:55:55.704512 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:55:55.704520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-03-05 00:55:55.704527 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:55:55.704533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-03-05 00:55:55.704540 | orchestrator |
skipping: [testbed-node-2]
2026-03-05 00:55:55.704547 | orchestrator |
2026-03-05 00:55:55.704554 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] ***********
2026-03-05 00:55:55.704561 | orchestrator | Thursday 05 March 2026 00:54:45 +0000 (0:00:00.834) 0:06:29.729 ********
2026-03-05 00:55:55.704573 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:55:55.704580 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:55:55.704598 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:55:55.704605 | orchestrator |
2026-03-05 00:55:55.704670 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] ***********
2026-03-05 00:55:55.704688 | orchestrator | Thursday 05 March 2026 00:54:46 +0000 (0:00:00.458) 0:06:30.188 ********
2026-03-05 00:55:55.704695 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:55:55.704702 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:55:55.704708 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:55:55.704715 | orchestrator |
2026-03-05 00:55:55.704722 | orchestrator | TASK [include_role : skyline] **************************************************
2026-03-05 00:55:55.704729 | orchestrator | Thursday 05 March 2026 00:54:47 +0000 (0:00:01.573) 0:06:31.761 ********
2026-03-05 00:55:55.704735 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 00:55:55.704742 | orchestrator |
2026-03-05 00:55:55.704749 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ********************
2026-03-05 00:55:55.704755 | orchestrator | Thursday 05 March 2026 00:54:49 +0000 (0:00:01.904) 0:06:33.665 ********
2026-03-05 00:55:55.704763 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes':
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-05 00:55:55.704775 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-05 00:55:55.704783 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-05 00:55:55.704796 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-05 00:55:55.704809 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-05 00:55:55.704816 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-05 00:55:55.704823 | orchestrator | 2026-03-05 00:55:55.704830 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-03-05 00:55:55.704840 | orchestrator | Thursday 05 March 2026 00:54:56 +0000 (0:00:06.509) 0:06:40.176 ******** 2026-03-05 00:55:55.704848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 
'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-05 00:55:55.704859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-05 00:55:55.704872 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:55.704878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 
'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-05 00:55:55.704885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-05 00:55:55.704892 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:55.704902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 
'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-05 00:55:55.704910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-05 00:55:55.704921 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:55.704927 | orchestrator | 2026-03-05 00:55:55.704934 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-03-05 00:55:55.704944 | orchestrator | Thursday 05 
March 2026 00:54:56 +0000 (0:00:00.732) 0:06:40.908 ********
2026-03-05 00:55:55.704952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-03-05 00:55:55.704959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-03-05 00:55:55.704966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-03-05 00:55:55.704973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-03-05 00:55:55.704980 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:55:55.704986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-03-05 00:55:55.704993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-03-05 00:55:55.705000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-03-05 00:55:55.705063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-03-05 00:55:55.705071 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:55:55.705078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-03-05 00:55:55.705089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-03-05 00:55:55.705096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-03-05 00:55:55.705103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-03-05 00:55:55.705115 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:55:55.705122 | orchestrator |
2026-03-05 00:55:55.705129 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************
2026-03-05 00:55:55.705135 | orchestrator | Thursday 05 March 2026 00:54:58 +0000 (0:00:01.884) 0:06:42.792 ********
2026-03-05 00:55:55.705142 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:55:55.705149 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:55:55.705155 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:55:55.705162 | orchestrator |
2026-03-05 00:55:55.705169 | orchestrator | TASK [proxysql-config :
Copying over skyline ProxySQL rules config] ************
2026-03-05 00:55:55.705181 | orchestrator | Thursday 05 March 2026 00:55:00 +0000 (0:00:01.379) 0:06:44.171 ********
2026-03-05 00:55:55.705192 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:55:55.705202 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:55:55.705213 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:55:55.705223 | orchestrator |
2026-03-05 00:55:55.705234 | orchestrator | TASK [include_role : swift] ****************************************************
2026-03-05 00:55:55.705245 | orchestrator | Thursday 05 March 2026 00:55:02 +0000 (0:00:02.588) 0:06:46.760 ********
2026-03-05 00:55:55.705255 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:55:55.705265 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:55:55.705275 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:55:55.705285 | orchestrator |
2026-03-05 00:55:55.705297 | orchestrator | TASK [include_role : tacker] ***************************************************
2026-03-05 00:55:55.705307 | orchestrator | Thursday 05 March 2026 00:55:03 +0000 (0:00:00.386) 0:06:47.147 ********
2026-03-05 00:55:55.705319 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:55:55.705330 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:55:55.705341 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:55:55.705352 | orchestrator |
2026-03-05 00:55:55.705363 | orchestrator | TASK [include_role : trove] ****************************************************
2026-03-05 00:55:55.705380 | orchestrator | Thursday 05 March 2026 00:55:03 +0000 (0:00:00.358) 0:06:47.506 ********
2026-03-05 00:55:55.705388 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:55:55.705394 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:55:55.705401 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:55:55.705407 | orchestrator |
2026-03-05 00:55:55.705414 | orchestrator | TASK [include_role : venus] ****************************************************
2026-03-05 00:55:55.705421 | orchestrator | Thursday 05 March 2026 00:55:04 +0000 (0:00:00.834) 0:06:48.340 ********
2026-03-05 00:55:55.705427 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:55:55.705434 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:55:55.705440 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:55:55.705447 | orchestrator |
2026-03-05 00:55:55.705454 | orchestrator | TASK [include_role : watcher] **************************************************
2026-03-05 00:55:55.705460 | orchestrator | Thursday 05 March 2026 00:55:04 +0000 (0:00:00.400) 0:06:48.741 ********
2026-03-05 00:55:55.705467 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:55:55.705473 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:55:55.705480 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:55:55.705486 | orchestrator |
2026-03-05 00:55:55.705493 | orchestrator | TASK [include_role : zun] ******************************************************
2026-03-05 00:55:55.705500 | orchestrator | Thursday 05 March 2026 00:55:05 +0000 (0:00:00.369) 0:06:49.110 ********
2026-03-05 00:55:55.705506 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:55:55.705513 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:55:55.705519 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:55:55.705526 | orchestrator |
2026-03-05 00:55:55.705532 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] *******
2026-03-05 00:55:55.705538 | orchestrator | Thursday 05 March 2026 00:55:06 +0000 (0:00:01.049) 0:06:50.160 ********
2026-03-05 00:55:55.705545 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:55:55.705551 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:55:55.705566 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:55:55.705576 | orchestrator |
2026-03-05 00:55:55.705586 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by
status] **********************
2026-03-05 00:55:55.705597 | orchestrator | Thursday 05 March 2026 00:55:06 +0000 (0:00:00.433) 0:06:50.893 ********
2026-03-05 00:55:55.705606 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:55:55.705616 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:55:55.705627 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:55:55.705637 | orchestrator |
2026-03-05 00:55:55.705647 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] **************
2026-03-05 00:55:55.705658 | orchestrator | Thursday 05 March 2026 00:55:07 +0000 (0:00:00.433) 0:06:51.326 ********
2026-03-05 00:55:55.705669 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:55:55.705677 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:55:55.705684 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:55:55.705690 | orchestrator |
2026-03-05 00:55:55.705696 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] *****************
2026-03-05 00:55:55.705702 | orchestrator | Thursday 05 March 2026 00:55:08 +0000 (0:00:00.904) 0:06:52.231 ********
2026-03-05 00:55:55.705708 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:55:55.705715 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:55:55.705721 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:55:55.705727 | orchestrator |
2026-03-05 00:55:55.705733 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] ****************
2026-03-05 00:55:55.705739 | orchestrator | Thursday 05 March 2026 00:55:09 +0000 (0:00:01.355) 0:06:53.587 ********
2026-03-05 00:55:55.705746 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:55:55.705752 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:55:55.705758 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:55:55.705764 | orchestrator |
2026-03-05 00:55:55.705775 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] ****************
2026-03-05 00:55:55.705782 | orchestrator | Thursday 05 March 2026 00:55:10 +0000 (0:00:00.968) 0:06:54.556 ********
2026-03-05 00:55:55.705788 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:55:55.705794 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:55:55.705800 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:55:55.705806 | orchestrator |
2026-03-05 00:55:55.705812 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2026-03-05 00:55:55.705818 | orchestrator | Thursday 05 March 2026 00:55:19 +0000 (0:00:09.207) 0:07:03.764 ********
2026-03-05 00:55:55.705824 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:55:55.705831 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:55:55.705837 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:55:55.705843 | orchestrator |
2026-03-05 00:55:55.705849 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2026-03-05 00:55:55.705855 | orchestrator | Thursday 05 March 2026 00:55:20 +0000 (0:00:00.793) 0:07:04.558 ********
2026-03-05 00:55:55.705861 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:55:55.705867 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:55:55.705873 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:55:55.705879 | orchestrator |
2026-03-05 00:55:55.705886 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2026-03-05 00:55:55.705892 | orchestrator | Thursday 05 March 2026 00:55:32 +0000 (0:00:11.576) 0:07:16.134 ********
2026-03-05 00:55:55.705898 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:55:55.705904 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:55:55.705910 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:55:55.705916 | orchestrator |
2026-03-05 00:55:55.705922 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2026-03-05 00:55:55.705929 | orchestrator | Thursday 05 March 2026 00:55:36 +0000
(0:00:04.809) 0:07:20.943 ********
2026-03-05 00:55:55.705935 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:55:55.705941 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:55:55.705947 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:55:55.705953 | orchestrator |
2026-03-05 00:55:55.705959 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2026-03-05 00:55:55.705970 | orchestrator | Thursday 05 March 2026 00:55:47 +0000 (0:00:10.465) 0:07:31.409 ********
2026-03-05 00:55:55.705977 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:55:55.705983 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:55:55.705989 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:55:55.705995 | orchestrator |
2026-03-05 00:55:55.706001 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2026-03-05 00:55:55.706051 | orchestrator | Thursday 05 March 2026 00:55:47 +0000 (0:00:00.386) 0:07:31.796 ********
2026-03-05 00:55:55.706058 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:55:55.706070 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:55:55.706076 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:55:55.706082 | orchestrator |
2026-03-05 00:55:55.706088 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2026-03-05 00:55:55.706095 | orchestrator | Thursday 05 March 2026 00:55:48 +0000 (0:00:00.727) 0:07:32.523 ********
2026-03-05 00:55:55.706101 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:55:55.706107 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:55:55.706113 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:55:55.706119 | orchestrator |
2026-03-05 00:55:55.706125 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2026-03-05 00:55:55.706133 | orchestrator | Thursday 05 March 2026 00:55:48 +0000 (0:00:00.399) 0:07:32.923 ********
2026-03-05 00:55:55.706139 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:55:55.706146 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:55:55.706152 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:55:55.706158 | orchestrator |
2026-03-05 00:55:55.706164 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2026-03-05 00:55:55.706170 | orchestrator | Thursday 05 March 2026 00:55:49 +0000 (0:00:00.389) 0:07:33.312 ********
2026-03-05 00:55:55.706176 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:55:55.706183 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:55:55.706189 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:55:55.706195 | orchestrator |
2026-03-05 00:55:55.706201 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2026-03-05 00:55:55.706208 | orchestrator | Thursday 05 March 2026 00:55:49 +0000 (0:00:00.402) 0:07:33.715 ********
2026-03-05 00:55:55.706214 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:55:55.706220 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:55:55.706226 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:55:55.706232 | orchestrator |
2026-03-05 00:55:55.706238 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2026-03-05 00:55:55.706244 | orchestrator | Thursday 05 March 2026 00:55:50 +0000 (0:00:00.409) 0:07:34.125 ********
2026-03-05 00:55:55.706250 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:55:55.706257 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:55:55.706263 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:55:55.706269 | orchestrator |
2026-03-05 00:55:55.706275 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2026-03-05 00:55:55.706281 | orchestrator | Thursday 05 March 2026 00:55:51 +0000 (0:00:01.438)
0:07:35.563 ******** 2026-03-05 00:55:55.706287 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:55:55.706294 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:55:55.706300 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:55:55.706306 | orchestrator | 2026-03-05 00:55:55.706312 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-05 00:55:55.706319 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-03-05 00:55:55.706326 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-03-05 00:55:55.706333 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-03-05 00:55:55.706344 | orchestrator | 2026-03-05 00:55:55.706350 | orchestrator | 2026-03-05 00:55:55.706360 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-05 00:55:55.706367 | orchestrator | Thursday 05 March 2026 00:55:52 +0000 (0:00:01.037) 0:07:36.601 ******** 2026-03-05 00:55:55.706373 | orchestrator | =============================================================================== 2026-03-05 00:55:55.706379 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 11.58s 2026-03-05 00:55:55.706385 | orchestrator | loadbalancer : Start backup keepalived container ----------------------- 10.47s 2026-03-05 00:55:55.706391 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 9.21s 2026-03-05 00:55:55.706398 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 8.09s 2026-03-05 00:55:55.706404 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.51s 2026-03-05 00:55:55.706410 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 6.18s 
2026-03-05 00:55:55.706416 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 6.13s 2026-03-05 00:55:55.706422 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 6.00s 2026-03-05 00:55:55.706428 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 5.89s 2026-03-05 00:55:55.706434 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 5.40s 2026-03-05 00:55:55.706440 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 5.38s 2026-03-05 00:55:55.706446 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 5.34s 2026-03-05 00:55:55.706452 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 5.32s 2026-03-05 00:55:55.706459 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 5.14s 2026-03-05 00:55:55.706465 | orchestrator | haproxy-config : Copying over ceph-rgw haproxy config ------------------- 5.07s 2026-03-05 00:55:55.706471 | orchestrator | loadbalancer : Copying over custom haproxy services configuration ------- 4.82s 2026-03-05 00:55:55.706477 | orchestrator | loadbalancer : Wait for backup proxysql to start ------------------------ 4.81s 2026-03-05 00:55:55.706483 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.76s 2026-03-05 00:55:55.706489 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 4.59s 2026-03-05 00:55:55.706495 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 4.57s 2026-03-05 00:55:55.706506 | orchestrator | 2026-03-05 00:55:55 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:55:58.727869 | orchestrator | 2026-03-05 00:55:58 | INFO  | Task 9992cc8e-05e2-4dcb-83af-88c297892dd7 is in state STARTED 2026-03-05 
00:55:58.727968 | orchestrator | 2026-03-05 00:55:58 | INFO  | Task 86f80eee-c277-41da-b162-68fb5ea68664 is in state STARTED 2026-03-05 00:55:58.727977 | orchestrator | 2026-03-05 00:55:58 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state STARTED 2026-03-05 00:55:58.727984 | orchestrator | 2026-03-05 00:55:58 | INFO  | Wait 1 second(s) until the next check
[... identical state checks for the same three tasks repeated every ~3 seconds from 00:56:01 through 00:57:57 ...]
2026-03-05 00:58:00.497312 | orchestrator | 2026-03-05 00:58:00 | INFO  | Task 9992cc8e-05e2-4dcb-83af-88c297892dd7 is in state STARTED 2026-03-05 00:58:00.498962 | orchestrator | 2026-03-05 00:58:00 | INFO  | Task 86f80eee-c277-41da-b162-68fb5ea68664 is in state STARTED 2026-03-05 00:58:00.506515 | orchestrator | 2026-03-05 00:58:00 | INFO  | Task 7c5fe29a-3db9-4cc1-990e-b5e2f433ae0f is in state SUCCESS 2026-03-05 00:58:00.506637 | orchestrator | 2026-03-05 00:58:00.508003 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-05 00:58:00.508043 | orchestrator | 2.16.14 2026-03-05 00:58:00.508048 | orchestrator | 2026-03-05 00:58:00.508053 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2026-03-05 00:58:00.508057 | orchestrator | 2026-03-05 00:58:00.508061 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-05 00:58:00.508066 | orchestrator | Thursday 05 March 2026 00:45:36 +0000 (0:00:01.167) 0:00:01.167 ******** 2026-03-05 00:58:00.508136 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:58:00.508142 
| orchestrator | 2026-03-05 00:58:00.508146 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-05 00:58:00.508150 | orchestrator | Thursday 05 March 2026 00:45:37 +0000 (0:00:01.390) 0:00:02.557 ******** 2026-03-05 00:58:00.508154 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:58:00.508158 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:58:00.508162 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:58:00.508171 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:58:00.508175 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:58:00.508178 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:58:00.508182 | orchestrator | 2026-03-05 00:58:00.508186 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-05 00:58:00.508190 | orchestrator | Thursday 05 March 2026 00:45:39 +0000 (0:00:01.862) 0:00:04.420 ******** 2026-03-05 00:58:00.508194 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:58:00.508197 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:58:00.508201 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:58:00.508212 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:58:00.508217 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:58:00.508225 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:58:00.508234 | orchestrator | 2026-03-05 00:58:00.508240 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-05 00:58:00.508246 | orchestrator | Thursday 05 March 2026 00:45:40 +0000 (0:00:00.894) 0:00:05.315 ******** 2026-03-05 00:58:00.508252 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:58:00.508258 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:58:00.508290 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:58:00.508297 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:58:00.508314 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:58:00.508320 | orchestrator | ok: 
[testbed-node-2] 2026-03-05 00:58:00.508326 | orchestrator | 2026-03-05 00:58:00.508332 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-05 00:58:00.508339 | orchestrator | Thursday 05 March 2026 00:45:41 +0000 (0:00:01.135) 0:00:06.450 ******** 2026-03-05 00:58:00.508376 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:58:00.508385 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:58:00.508392 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:58:00.508399 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:58:00.508405 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:58:00.508412 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:58:00.508419 | orchestrator | 2026-03-05 00:58:00.508425 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-05 00:58:00.508437 | orchestrator | Thursday 05 March 2026 00:45:42 +0000 (0:00:00.835) 0:00:07.285 ******** 2026-03-05 00:58:00.508441 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:58:00.508445 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:58:00.508453 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:58:00.508456 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:58:00.508460 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:58:00.508464 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:58:00.508481 | orchestrator | 2026-03-05 00:58:00.508494 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-05 00:58:00.508498 | orchestrator | Thursday 05 March 2026 00:45:43 +0000 (0:00:00.804) 0:00:08.090 ******** 2026-03-05 00:58:00.508502 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:58:00.508514 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:58:00.508518 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:58:00.508522 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:58:00.508537 | orchestrator | ok: [testbed-node-1] 2026-03-05 
00:58:00.508540 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:58:00.508544 | orchestrator | 2026-03-05 00:58:00.508548 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-05 00:58:00.508552 | orchestrator | Thursday 05 March 2026 00:45:44 +0000 (0:00:01.096) 0:00:09.187 ******** 2026-03-05 00:58:00.508556 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.508560 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:58:00.508572 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:58:00.508576 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.508579 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.508583 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:00.508587 | orchestrator | 2026-03-05 00:58:00.508591 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-05 00:58:00.508594 | orchestrator | Thursday 05 March 2026 00:45:45 +0000 (0:00:00.980) 0:00:10.167 ******** 2026-03-05 00:58:00.508598 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:58:00.508602 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:58:00.508621 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:58:00.508626 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:58:00.508630 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:58:00.508634 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:58:00.508639 | orchestrator | 2026-03-05 00:58:00.508643 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-05 00:58:00.508647 | orchestrator | Thursday 05 March 2026 00:45:46 +0000 (0:00:00.999) 0:00:11.166 ******** 2026-03-05 00:58:00.508652 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-05 00:58:00.508657 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-05 
00:58:00.508661 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-05 00:58:00.508666 | orchestrator | 2026-03-05 00:58:00.508670 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-05 00:58:00.508675 | orchestrator | Thursday 05 March 2026 00:45:47 +0000 (0:00:00.860) 0:00:12.027 ******** 2026-03-05 00:58:00.508683 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:58:00.508688 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:58:00.508692 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:58:00.508709 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:58:00.508714 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:58:00.508718 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:58:00.508723 | orchestrator | 2026-03-05 00:58:00.508727 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-05 00:58:00.508732 | orchestrator | Thursday 05 March 2026 00:45:48 +0000 (0:00:01.498) 0:00:13.526 ******** 2026-03-05 00:58:00.508736 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-05 00:58:00.508741 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-05 00:58:00.508745 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-05 00:58:00.508749 | orchestrator | 2026-03-05 00:58:00.508754 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-05 00:58:00.508758 | orchestrator | Thursday 05 March 2026 00:45:51 +0000 (0:00:02.549) 0:00:16.075 ******** 2026-03-05 00:58:00.508762 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-05 00:58:00.508767 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-05 00:58:00.508771 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-2)  2026-03-05 00:58:00.508775 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.508780 | orchestrator | 2026-03-05 00:58:00.508784 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-05 00:58:00.508788 | orchestrator | Thursday 05 March 2026 00:45:52 +0000 (0:00:01.371) 0:00:17.446 ******** 2026-03-05 00:58:00.508794 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-05 00:58:00.508800 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-05 00:58:00.508805 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-05 00:58:00.508809 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.508814 | orchestrator | 2026-03-05 00:58:00.508818 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-05 00:58:00.508823 | orchestrator | Thursday 05 March 2026 00:45:53 +0000 (0:00:01.176) 0:00:18.622 ******** 2026-03-05 00:58:00.508828 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 
'ansible_loop_var': 'item'})  2026-03-05 00:58:00.508834 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-05 00:58:00.508839 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-05 00:58:00.508846 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.508850 | orchestrator | 2026-03-05 00:58:00.508860 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-05 00:58:00.508864 | orchestrator | Thursday 05 March 2026 00:45:54 +0000 (0:00:00.891) 0:00:19.514 ******** 2026-03-05 00:58:00.508876 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-05 00:45:49.667351', 'end': '2026-03-05 00:45:49.755261', 'delta': '0:00:00.087910', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 
'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-05 00:58:00.508883 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-05 00:45:50.605664', 'end': '2026-03-05 00:45:50.710178', 'delta': '0:00:00.104514', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-05 00:58:00.508888 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-05 00:45:51.176048', 'end': '2026-03-05 00:45:51.271277', 'delta': '0:00:00.095229', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-05 00:58:00.508893 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.508897 | orchestrator | 2026-03-05 00:58:00.508902 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-05 00:58:00.508906 | orchestrator | Thursday 05 March 2026 00:45:55 +0000 (0:00:00.577) 0:00:20.092 ******** 2026-03-05 
00:58:00.508911 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:58:00.508915 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:58:00.508919 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:58:00.508924 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:58:00.508928 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:58:00.508938 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:58:00.508942 | orchestrator | 2026-03-05 00:58:00.508947 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-05 00:58:00.508951 | orchestrator | Thursday 05 March 2026 00:45:57 +0000 (0:00:02.490) 0:00:22.582 ******** 2026-03-05 00:58:00.508956 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-05 00:58:00.508964 | orchestrator | 2026-03-05 00:58:00.508968 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-05 00:58:00.508973 | orchestrator | Thursday 05 March 2026 00:45:59 +0000 (0:00:01.387) 0:00:23.969 ******** 2026-03-05 00:58:00.508987 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.508991 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:58:00.508996 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:58:00.509003 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.509007 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.509011 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:00.509015 | orchestrator | 2026-03-05 00:58:00.509024 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-05 00:58:00.509028 | orchestrator | Thursday 05 March 2026 00:46:02 +0000 (0:00:02.779) 0:00:26.749 ******** 2026-03-05 00:58:00.509031 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.509035 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:58:00.509042 | orchestrator | skipping: [testbed-node-5] 2026-03-05 
00:58:00.509046 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.509050 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.509053 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:00.509057 | orchestrator | 2026-03-05 00:58:00.509080 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-05 00:58:00.509085 | orchestrator | Thursday 05 March 2026 00:46:05 +0000 (0:00:03.000) 0:00:29.750 ******** 2026-03-05 00:58:00.509088 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.509092 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:58:00.509096 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:58:00.509100 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.509103 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.509107 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:00.509111 | orchestrator | 2026-03-05 00:58:00.509114 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-05 00:58:00.509118 | orchestrator | Thursday 05 March 2026 00:46:06 +0000 (0:00:01.671) 0:00:31.421 ******** 2026-03-05 00:58:00.509122 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.509126 | orchestrator | 2026-03-05 00:58:00.509129 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-05 00:58:00.509133 | orchestrator | Thursday 05 March 2026 00:46:07 +0000 (0:00:00.378) 0:00:31.799 ******** 2026-03-05 00:58:00.509137 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.509141 | orchestrator | 2026-03-05 00:58:00.509144 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-05 00:58:00.509148 | orchestrator | Thursday 05 March 2026 00:46:07 +0000 (0:00:00.622) 0:00:32.422 ******** 2026-03-05 00:58:00.509152 | orchestrator | skipping: [testbed-node-3] 2026-03-05 
00:58:00.509156 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:58:00.509159 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.509168 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:58:00.509172 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.509176 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:00.509179 | orchestrator | 2026-03-05 00:58:00.509183 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-05 00:58:00.509187 | orchestrator | Thursday 05 March 2026 00:46:08 +0000 (0:00:00.996) 0:00:33.419 ******** 2026-03-05 00:58:00.509191 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.509194 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:58:00.509198 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:58:00.509202 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.509205 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.509209 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:00.509213 | orchestrator | 2026-03-05 00:58:00.509216 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-05 00:58:00.509220 | orchestrator | Thursday 05 March 2026 00:46:10 +0000 (0:00:01.575) 0:00:34.994 ******** 2026-03-05 00:58:00.509224 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.509227 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:58:00.509231 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:58:00.509238 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.509241 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.509245 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:00.509249 | orchestrator | 2026-03-05 00:58:00.509252 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-05 00:58:00.509256 | orchestrator | Thursday 05 March 
2026 00:46:11 +0000 (0:00:00.803) 0:00:35.798 ******** 2026-03-05 00:58:00.509260 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.509264 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:58:00.509267 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:58:00.509271 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.509275 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.509278 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:00.509282 | orchestrator | 2026-03-05 00:58:00.509286 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-05 00:58:00.509289 | orchestrator | Thursday 05 March 2026 00:46:12 +0000 (0:00:00.879) 0:00:36.678 ******** 2026-03-05 00:58:00.509293 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.509297 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:58:00.509301 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:58:00.509304 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.509308 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.509312 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:00.509315 | orchestrator | 2026-03-05 00:58:00.509319 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-05 00:58:00.509323 | orchestrator | Thursday 05 March 2026 00:46:12 +0000 (0:00:00.906) 0:00:37.584 ******** 2026-03-05 00:58:00.509326 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.509330 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:58:00.509334 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:58:00.509337 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.509341 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.509345 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:00.509349 | orchestrator | 2026-03-05 00:58:00.509352 | orchestrator | TASK 
[ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-05 00:58:00.509356 | orchestrator | Thursday 05 March 2026 00:46:14 +0000 (0:00:01.744) 0:00:39.329 ******** 2026-03-05 00:58:00.509360 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.509364 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:58:00.509367 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:58:00.509371 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.509375 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.509381 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:00.509421 | orchestrator | 2026-03-05 00:58:00.509428 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-05 00:58:00.509433 | orchestrator | Thursday 05 March 2026 00:46:15 +0000 (0:00:00.985) 0:00:40.315 ******** 2026-03-05 00:58:00.509440 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f88409fd--5147--5194--8288--2488b5e44352-osd--block--f88409fd--5147--5194--8288--2488b5e44352', 'dm-uuid-LVM-TF2aYQ1gcI3opAwWGnpIMDJl6d8DlJBZKYpryDGlNdcI2vO1IQcI176nGOGfrZZB'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-05 00:58:00.509447 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9d6733ad--9ad8--5bce--b749--e645aedee181-osd--block--9d6733ad--9ad8--5bce--b749--e645aedee181', 'dm-uuid-LVM-UEVEB0cZjoklxsfZk5hz7YwDzzENqXERbVNoNDV9w1eHnvNFLMJYbXXgxazLyb4w'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 
'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-05 00:58:00.509485 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 00:58:00.509495 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 00:58:00.509502 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 00:58:00.509510 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': 
'0', 'vendor': None, 'virtual': 1}})  2026-03-05 00:58:00.509517 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 00:58:00.509524 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 00:58:00.509531 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--130794de--baff--5f0b--9c30--9a8206b73831-osd--block--130794de--baff--5f0b--9c30--9a8206b73831', 'dm-uuid-LVM-6SxwFyILndwXvKVHabqVnqJVbiSceNTQI62kIoEWE4ddPGfqexPf4TEVW3OPAMve'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-05 00:58:00.509539 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 00:58:00.509546 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 00:58:00.509571 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--54671a7c--dad9--563e--9508--4448c9acfc6a-osd--block--54671a7c--dad9--563e--9508--4448c9acfc6a', 'dm-uuid-LVM-GzgvoDDFvX2TwyrNJxloOIgXhzHvcOX3dh3GYtgbr1lY7Iy9wJxSNzOE1zAHceVu'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-05 00:58:00.509577 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 00:58:00.509583 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_834c07da-6670-4f26-8062-9b7380900cd1', 'scsi-SQEMU_QEMU_HARDDISK_834c07da-6670-4f26-8062-9b7380900cd1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_834c07da-6670-4f26-8062-9b7380900cd1-part1', 'scsi-SQEMU_QEMU_HARDDISK_834c07da-6670-4f26-8062-9b7380900cd1-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_834c07da-6670-4f26-8062-9b7380900cd1-part14', 'scsi-SQEMU_QEMU_HARDDISK_834c07da-6670-4f26-8062-9b7380900cd1-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_834c07da-6670-4f26-8062-9b7380900cd1-part15', 'scsi-SQEMU_QEMU_HARDDISK_834c07da-6670-4f26-8062-9b7380900cd1-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_834c07da-6670-4f26-8062-9b7380900cd1-part16', 'scsi-SQEMU_QEMU_HARDDISK_834c07da-6670-4f26-8062-9b7380900cd1-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-05 00:58:00.509588 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 00:58:00.509595 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--f88409fd--5147--5194--8288--2488b5e44352-osd--block--f88409fd--5147--5194--8288--2488b5e44352'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-jdWfEB-N83k-MDWn-BOLC-ihm4-IydT-Dpp4Ol', 'scsi-0QEMU_QEMU_HARDDISK_46af06ac-e806-45b3-baa6-786374d24d95', 'scsi-SQEMU_QEMU_HARDDISK_46af06ac-e806-45b3-baa6-786374d24d95'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-05 00:58:00.509608 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 00:58:00.509616 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--9d6733ad--9ad8--5bce--b749--e645aedee181-osd--block--9d6733ad--9ad8--5bce--b749--e645aedee181'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-V7ZD6i-hIWS-JtXW-HcWn-0dcX-ecnk-fIwTEz', 'scsi-0QEMU_QEMU_HARDDISK_94265553-26b7-47c9-a922-5463d2be5f34', 'scsi-SQEMU_QEMU_HARDDISK_94265553-26b7-47c9-a922-5463d2be5f34'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-05 00:58:00.509623 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 00:58:00.509629 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_db99048b-c1ef-4f9e-82d3-cd84d3f63e80', 'scsi-SQEMU_QEMU_HARDDISK_db99048b-c1ef-4f9e-82d3-cd84d3f63e80'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-05 00:58:00.509637 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 00:58:00.509644 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-05-00-03-16-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-05 00:58:00.509655 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-03-05 00:58:00.509662 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 00:58:00.509675 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 00:58:00.509681 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7f4ff93a--c4fd--5f9b--af1c--107d8e49bf15-osd--block--7f4ff93a--c4fd--5f9b--af1c--107d8e49bf15', 'dm-uuid-LVM-Pf3XoZa14DA1N8trbcyuXz1HFWwultSjyo0RNgMBhzdapfZ8f9kjAwVQTfyGGbwo'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-05 00:58:00.509686 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26e3da3f-ebef-4f2e-987f-6c33458d570f', 'scsi-SQEMU_QEMU_HARDDISK_26e3da3f-ebef-4f2e-987f-6c33458d570f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26e3da3f-ebef-4f2e-987f-6c33458d570f-part1', 'scsi-SQEMU_QEMU_HARDDISK_26e3da3f-ebef-4f2e-987f-6c33458d570f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26e3da3f-ebef-4f2e-987f-6c33458d570f-part14', 'scsi-SQEMU_QEMU_HARDDISK_26e3da3f-ebef-4f2e-987f-6c33458d570f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26e3da3f-ebef-4f2e-987f-6c33458d570f-part15', 'scsi-SQEMU_QEMU_HARDDISK_26e3da3f-ebef-4f2e-987f-6c33458d570f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26e3da3f-ebef-4f2e-987f-6c33458d570f-part16', 'scsi-SQEMU_QEMU_HARDDISK_26e3da3f-ebef-4f2e-987f-6c33458d570f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-05 00:58:00.509696 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--130794de--baff--5f0b--9c30--9a8206b73831-osd--block--130794de--baff--5f0b--9c30--9a8206b73831'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-zxJquG-kgIY-dbro-xDa2-2Hhj-fSLP-y9EZ7f', 'scsi-0QEMU_QEMU_HARDDISK_fde3cda2-3067-4d86-95c6-d39f62804520', 'scsi-SQEMU_QEMU_HARDDISK_fde3cda2-3067-4d86-95c6-d39f62804520'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-05 00:58:00.509959 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--56dff28b--2239--50bc--bb4f--66f9aa80ba88-osd--block--56dff28b--2239--50bc--bb4f--66f9aa80ba88', 'dm-uuid-LVM-7021JZpIOlxZSNvSousoCRWY6EUPV9VtGoV6JFDR2ugTDoJu1wseGz0A83f6v6Gj'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-05 00:58:00.509971 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--54671a7c--dad9--563e--9508--4448c9acfc6a-osd--block--54671a7c--dad9--563e--9508--4448c9acfc6a'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Ijjo4n-FAhc-UcPL-RECK-8Umb-4nOw-0gbpuM', 'scsi-0QEMU_QEMU_HARDDISK_5fc6e5d1-feaa-44be-badf-9551630a8ded', 'scsi-SQEMU_QEMU_HARDDISK_5fc6e5d1-feaa-44be-badf-9551630a8ded'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-05 00:58:00.509975 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 00:58:00.509979 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.509984 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4cc63ee5-51dc-4d14-b9fb-faf031b30aaa', 'scsi-SQEMU_QEMU_HARDDISK_4cc63ee5-51dc-4d14-b9fb-faf031b30aaa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-05 00:58:00.509988 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-05-00-03-08-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-05 00:58:00.509996 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-05 00:58:00.510001 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-05 00:58:00.510005 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-05 00:58:00.510046 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-05 00:58:00.510059 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-05 00:58:00.510065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-05 00:58:00.510108 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-05 00:58:00.510115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-05 00:58:00.510121 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:58:00.510127 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-05 00:58:00.510161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-05 00:58:00.510168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-05 00:58:00.510175 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-05 00:58:00.510190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-05 00:58:00.510198 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d85b406f-f47a-4803-8455-48f8dde86a68', 'scsi-SQEMU_QEMU_HARDDISK_d85b406f-f47a-4803-8455-48f8dde86a68'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d85b406f-f47a-4803-8455-48f8dde86a68-part1', 'scsi-SQEMU_QEMU_HARDDISK_d85b406f-f47a-4803-8455-48f8dde86a68-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d85b406f-f47a-4803-8455-48f8dde86a68-part14', 'scsi-SQEMU_QEMU_HARDDISK_d85b406f-f47a-4803-8455-48f8dde86a68-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d85b406f-f47a-4803-8455-48f8dde86a68-part15', 'scsi-SQEMU_QEMU_HARDDISK_d85b406f-f47a-4803-8455-48f8dde86a68-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d85b406f-f47a-4803-8455-48f8dde86a68-part16', 'scsi-SQEMU_QEMU_HARDDISK_d85b406f-f47a-4803-8455-48f8dde86a68-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-05 00:58:00.510205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-05 00:58:00.510216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-05 00:58:00.510233 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--7f4ff93a--c4fd--5f9b--af1c--107d8e49bf15-osd--block--7f4ff93a--c4fd--5f9b--af1c--107d8e49bf15'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-FuhEQw-hBkB-kamn-cyjG-liQC-9xZP-ztM27Q', 'scsi-0QEMU_QEMU_HARDDISK_51d29519-c1f9-43c2-8da2-810d6ee2cf1d', 'scsi-SQEMU_QEMU_HARDDISK_51d29519-c1f9-43c2-8da2-810d6ee2cf1d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-05 00:58:00.510249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e11f2b2-c673-403d-8d8e-e558e292c82f', 'scsi-SQEMU_QEMU_HARDDISK_5e11f2b2-c673-403d-8d8e-e558e292c82f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e11f2b2-c673-403d-8d8e-e558e292c82f-part1', 'scsi-SQEMU_QEMU_HARDDISK_5e11f2b2-c673-403d-8d8e-e558e292c82f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e11f2b2-c673-403d-8d8e-e558e292c82f-part14', 'scsi-SQEMU_QEMU_HARDDISK_5e11f2b2-c673-403d-8d8e-e558e292c82f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e11f2b2-c673-403d-8d8e-e558e292c82f-part15', 'scsi-SQEMU_QEMU_HARDDISK_5e11f2b2-c673-403d-8d8e-e558e292c82f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e11f2b2-c673-403d-8d8e-e558e292c82f-part16', 'scsi-SQEMU_QEMU_HARDDISK_5e11f2b2-c673-403d-8d8e-e558e292c82f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-05 00:58:00.510259 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--56dff28b--2239--50bc--bb4f--66f9aa80ba88-osd--block--56dff28b--2239--50bc--bb4f--66f9aa80ba88'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-xeeaVk-tk58-c70M-ecxI-uAuR-vNFi-S3719x', 'scsi-0QEMU_QEMU_HARDDISK_01b35dfc-cc13-430f-9521-065aaefb7085', 'scsi-SQEMU_QEMU_HARDDISK_01b35dfc-cc13-430f-9521-065aaefb7085'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-05 00:58:00.510272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-05-00-03-11-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-05 00:58:00.510278 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f3d47084-7273-4e4c-b048-5cf25f7ffc67', 'scsi-SQEMU_QEMU_HARDDISK_f3d47084-7273-4e4c-b048-5cf25f7ffc67'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-05 00:58:00.510292 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-05-00-03-14-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-05 00:58:00.510324 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-05 00:58:00.510333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-05 00:58:00.510341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-05 00:58:00.510347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-05 00:58:00.510361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-05 00:58:00.510368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-05 00:58:00.510374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-05 00:58:00.510381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-05 00:58:00.510427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_97a4e51c-10c3-49c3-9fc4-94e957861be7', 'scsi-SQEMU_QEMU_HARDDISK_97a4e51c-10c3-49c3-9fc4-94e957861be7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_97a4e51c-10c3-49c3-9fc4-94e957861be7-part1', 'scsi-SQEMU_QEMU_HARDDISK_97a4e51c-10c3-49c3-9fc4-94e957861be7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_97a4e51c-10c3-49c3-9fc4-94e957861be7-part14', 'scsi-SQEMU_QEMU_HARDDISK_97a4e51c-10c3-49c3-9fc4-94e957861be7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_97a4e51c-10c3-49c3-9fc4-94e957861be7-part15', 'scsi-SQEMU_QEMU_HARDDISK_97a4e51c-10c3-49c3-9fc4-94e957861be7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_97a4e51c-10c3-49c3-9fc4-94e957861be7-part16', 'scsi-SQEMU_QEMU_HARDDISK_97a4e51c-10c3-49c3-9fc4-94e957861be7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-05 00:58:00.510434 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:58:00.510442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-05-00-03-10-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-05 00:58:00.510466 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:58:00.510471 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:58:00.510475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-05 00:58:00.510479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-05 00:58:00.510483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-05 00:58:00.510497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-05 00:58:00.510508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-05 00:58:00.510512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-05 00:58:00.510516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-05 00:58:00.510520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-05 00:58:00.510553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a8ef373d-acca-493f-badf-3a0028b34dd0', 'scsi-SQEMU_QEMU_HARDDISK_a8ef373d-acca-493f-badf-3a0028b34dd0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a8ef373d-acca-493f-badf-3a0028b34dd0-part1', 'scsi-SQEMU_QEMU_HARDDISK_a8ef373d-acca-493f-badf-3a0028b34dd0-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a8ef373d-acca-493f-badf-3a0028b34dd0-part14', 'scsi-SQEMU_QEMU_HARDDISK_a8ef373d-acca-493f-badf-3a0028b34dd0-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a8ef373d-acca-493f-badf-3a0028b34dd0-part15', 'scsi-SQEMU_QEMU_HARDDISK_a8ef373d-acca-493f-badf-3a0028b34dd0-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a8ef373d-acca-493f-badf-3a0028b34dd0-part16', 'scsi-SQEMU_QEMU_HARDDISK_a8ef373d-acca-493f-badf-3a0028b34dd0-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-05 00:58:00.510564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-05-00-03-18-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-05 00:58:00.510569 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:58:00.510572 | orchestrator |
2026-03-05 00:58:00.510576 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-03-05 00:58:00.510580 | orchestrator | Thursday 05 March 2026  00:46:17 +0000 (0:00:01.933)       0:00:42.248 ********
2026-03-05 00:58:00.510585 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f88409fd--5147--5194--8288--2488b5e44352-osd--block--f88409fd--5147--5194--8288--2488b5e44352', 'dm-uuid-LVM-TF2aYQ1gcI3opAwWGnpIMDJl6d8DlJBZKYpryDGlNdcI2vO1IQcI176nGOGfrZZB'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-05 00:58:00.510593 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9d6733ad--9ad8--5bce--b749--e645aedee181-osd--block--9d6733ad--9ad8--5bce--b749--e645aedee181', 'dm-uuid-LVM-UEVEB0cZjoklxsfZk5hz7YwDzzENqXERbVNoNDV9w1eHnvNFLMJYbXXgxazLyb4w'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-05 00:58:00.510599 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-05 00:58:00.510606 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-05 00:58:00.510612 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-05 00:58:00.510625 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--130794de--baff--5f0b--9c30--9a8206b73831-osd--block--130794de--baff--5f0b--9c30--9a8206b73831', 'dm-uuid-LVM-6SxwFyILndwXvKVHabqVnqJVbiSceNTQI62kIoEWE4ddPGfqexPf4TEVW3OPAMve'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-05 00:58:00.510635 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--54671a7c--dad9--563e--9508--4448c9acfc6a-osd--block--54671a7c--dad9--563e--9508--4448c9acfc6a', 'dm-uuid-LVM-GzgvoDDFvX2TwyrNJxloOIgXhzHvcOX3dh3GYtgbr1lY7Iy9wJxSNzOE1zAHceVu'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-05 00:58:00.510647 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-05 00:58:00.510653 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-05 00:58:00.510659 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-05 00:58:00.510679 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-05 00:58:00.510700 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-05 00:58:00.510708 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor':
None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 00:58:00.510811 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 00:58:00.510822 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 00:58:00.510831 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 
00:58:00.510839 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 00:58:00.510862 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_834c07da-6670-4f26-8062-9b7380900cd1', 'scsi-SQEMU_QEMU_HARDDISK_834c07da-6670-4f26-8062-9b7380900cd1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_834c07da-6670-4f26-8062-9b7380900cd1-part1', 'scsi-SQEMU_QEMU_HARDDISK_834c07da-6670-4f26-8062-9b7380900cd1-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_834c07da-6670-4f26-8062-9b7380900cd1-part14', 'scsi-SQEMU_QEMU_HARDDISK_834c07da-6670-4f26-8062-9b7380900cd1-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_834c07da-6670-4f26-8062-9b7380900cd1-part15', 'scsi-SQEMU_QEMU_HARDDISK_834c07da-6670-4f26-8062-9b7380900cd1-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_834c07da-6670-4f26-8062-9b7380900cd1-part16', 'scsi-SQEMU_QEMU_HARDDISK_834c07da-6670-4f26-8062-9b7380900cd1-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 00:58:00.510889 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 00:58:00.510902 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--f88409fd--5147--5194--8288--2488b5e44352-osd--block--f88409fd--5147--5194--8288--2488b5e44352'], 'host': 
'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-jdWfEB-N83k-MDWn-BOLC-ihm4-IydT-Dpp4Ol', 'scsi-0QEMU_QEMU_HARDDISK_46af06ac-e806-45b3-baa6-786374d24d95', 'scsi-SQEMU_QEMU_HARDDISK_46af06ac-e806-45b3-baa6-786374d24d95'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 00:58:00.510914 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 00:58:00.510937 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--9d6733ad--9ad8--5bce--b749--e645aedee181-osd--block--9d6733ad--9ad8--5bce--b749--e645aedee181'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-V7ZD6i-hIWS-JtXW-HcWn-0dcX-ecnk-fIwTEz', 'scsi-0QEMU_QEMU_HARDDISK_94265553-26b7-47c9-a922-5463d2be5f34', 'scsi-SQEMU_QEMU_HARDDISK_94265553-26b7-47c9-a922-5463d2be5f34'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 00:58:00.510964 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 00:58:00.510982 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_db99048b-c1ef-4f9e-82d3-cd84d3f63e80', 'scsi-SQEMU_QEMU_HARDDISK_db99048b-c1ef-4f9e-82d3-cd84d3f63e80'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 00:58:00.511007 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26e3da3f-ebef-4f2e-987f-6c33458d570f', 'scsi-SQEMU_QEMU_HARDDISK_26e3da3f-ebef-4f2e-987f-6c33458d570f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26e3da3f-ebef-4f2e-987f-6c33458d570f-part1', 'scsi-SQEMU_QEMU_HARDDISK_26e3da3f-ebef-4f2e-987f-6c33458d570f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26e3da3f-ebef-4f2e-987f-6c33458d570f-part14', 'scsi-SQEMU_QEMU_HARDDISK_26e3da3f-ebef-4f2e-987f-6c33458d570f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26e3da3f-ebef-4f2e-987f-6c33458d570f-part15', 
'scsi-SQEMU_QEMU_HARDDISK_26e3da3f-ebef-4f2e-987f-6c33458d570f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26e3da3f-ebef-4f2e-987f-6c33458d570f-part16', 'scsi-SQEMU_QEMU_HARDDISK_26e3da3f-ebef-4f2e-987f-6c33458d570f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 00:58:00.511019 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-05-00-03-16-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 00:58:00.511037 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': 
['ceph--130794de--baff--5f0b--9c30--9a8206b73831-osd--block--130794de--baff--5f0b--9c30--9a8206b73831'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-zxJquG-kgIY-dbro-xDa2-2Hhj-fSLP-y9EZ7f', 'scsi-0QEMU_QEMU_HARDDISK_fde3cda2-3067-4d86-95c6-d39f62804520', 'scsi-SQEMU_QEMU_HARDDISK_fde3cda2-3067-4d86-95c6-d39f62804520'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 00:58:00.511048 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--54671a7c--dad9--563e--9508--4448c9acfc6a-osd--block--54671a7c--dad9--563e--9508--4448c9acfc6a'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Ijjo4n-FAhc-UcPL-RECK-8Umb-4nOw-0gbpuM', 'scsi-0QEMU_QEMU_HARDDISK_5fc6e5d1-feaa-44be-badf-9551630a8ded', 'scsi-SQEMU_QEMU_HARDDISK_5fc6e5d1-feaa-44be-badf-9551630a8ded'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 00:58:00.511060 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4cc63ee5-51dc-4d14-b9fb-faf031b30aaa', 'scsi-SQEMU_QEMU_HARDDISK_4cc63ee5-51dc-4d14-b9fb-faf031b30aaa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 00:58:00.511096 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-05-00-03-08-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 00:58:00.511113 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7f4ff93a--c4fd--5f9b--af1c--107d8e49bf15-osd--block--7f4ff93a--c4fd--5f9b--af1c--107d8e49bf15', 'dm-uuid-LVM-Pf3XoZa14DA1N8trbcyuXz1HFWwultSjyo0RNgMBhzdapfZ8f9kjAwVQTfyGGbwo'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 00:58:00.511125 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--56dff28b--2239--50bc--bb4f--66f9aa80ba88-osd--block--56dff28b--2239--50bc--bb4f--66f9aa80ba88', 'dm-uuid-LVM-7021JZpIOlxZSNvSousoCRWY6EUPV9VtGoV6JFDR2ugTDoJu1wseGz0A83f6v6Gj'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 00:58:00.511147 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 00:58:00.511158 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 00:58:00.511170 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.511181 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 00:58:00.511195 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 00:58:00.511206 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 00:58:00.511213 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 00:58:00.511246 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 00:58:00.511253 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 00:58:00.511261 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 00:58:00.511267 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in 
groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 00:58:00.511280 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 00:58:00.511309 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 00:58:00.511315 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 00:58:00.511319 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 00:58:00.511322 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 00:58:00.511326 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 00:58:00.511330 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:58:00.511696 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e11f2b2-c673-403d-8d8e-e558e292c82f', 'scsi-SQEMU_QEMU_HARDDISK_5e11f2b2-c673-403d-8d8e-e558e292c82f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e11f2b2-c673-403d-8d8e-e558e292c82f-part1', 'scsi-SQEMU_QEMU_HARDDISK_5e11f2b2-c673-403d-8d8e-e558e292c82f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e11f2b2-c673-403d-8d8e-e558e292c82f-part14', 'scsi-SQEMU_QEMU_HARDDISK_5e11f2b2-c673-403d-8d8e-e558e292c82f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e11f2b2-c673-403d-8d8e-e558e292c82f-part15', 'scsi-SQEMU_QEMU_HARDDISK_5e11f2b2-c673-403d-8d8e-e558e292c82f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 
'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e11f2b2-c673-403d-8d8e-e558e292c82f-part16', 'scsi-SQEMU_QEMU_HARDDISK_5e11f2b2-c673-403d-8d8e-e558e292c82f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 00:58:00.511715 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d85b406f-f47a-4803-8455-48f8dde86a68', 'scsi-SQEMU_QEMU_HARDDISK_d85b406f-f47a-4803-8455-48f8dde86a68'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d85b406f-f47a-4803-8455-48f8dde86a68-part1', 'scsi-SQEMU_QEMU_HARDDISK_d85b406f-f47a-4803-8455-48f8dde86a68-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d85b406f-f47a-4803-8455-48f8dde86a68-part14', 'scsi-SQEMU_QEMU_HARDDISK_d85b406f-f47a-4803-8455-48f8dde86a68-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d85b406f-f47a-4803-8455-48f8dde86a68-part15', 'scsi-SQEMU_QEMU_HARDDISK_d85b406f-f47a-4803-8455-48f8dde86a68-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d85b406f-f47a-4803-8455-48f8dde86a68-part16', 'scsi-SQEMU_QEMU_HARDDISK_d85b406f-f47a-4803-8455-48f8dde86a68-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-05 00:58:00.511732 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-05-00-03-11-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 00:58:00.511740 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--7f4ff93a--c4fd--5f9b--af1c--107d8e49bf15-osd--block--7f4ff93a--c4fd--5f9b--af1c--107d8e49bf15'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-FuhEQw-hBkB-kamn-cyjG-liQC-9xZP-ztM27Q', 'scsi-0QEMU_QEMU_HARDDISK_51d29519-c1f9-43c2-8da2-810d6ee2cf1d', 'scsi-SQEMU_QEMU_HARDDISK_51d29519-c1f9-43c2-8da2-810d6ee2cf1d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 00:58:00.511747 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--56dff28b--2239--50bc--bb4f--66f9aa80ba88-osd--block--56dff28b--2239--50bc--bb4f--66f9aa80ba88'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-xeeaVk-tk58-c70M-ecxI-uAuR-vNFi-S3719x', 'scsi-0QEMU_QEMU_HARDDISK_01b35dfc-cc13-430f-9521-065aaefb7085', 'scsi-SQEMU_QEMU_HARDDISK_01b35dfc-cc13-430f-9521-065aaefb7085'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 00:58:00.511763 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f3d47084-7273-4e4c-b048-5cf25f7ffc67', 'scsi-SQEMU_QEMU_HARDDISK_f3d47084-7273-4e4c-b048-5cf25f7ffc67'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 00:58:00.511776 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-05-00-03-14-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 00:58:00.511788 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 00:58:00.511794 | orchestrator | 
skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 00:58:00.511801 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 00:58:00.511808 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 00:58:00.511814 | orchestrator | skipping: [testbed-node-0] 2026-03-05 
00:58:00.511821 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 00:58:00.511826 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:58:00.511830 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 00:58:00.511849 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2026-03-05 00:58:00.511854 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 00:58:00.511858 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 00:58:00.511862 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 
00:58:00.511866 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 00:58:00.511870 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 00:58:00.511881 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 00:58:00.511885 | orchestrator | skipping: 
[testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 00:58:00.511890 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_97a4e51c-10c3-49c3-9fc4-94e957861be7', 'scsi-SQEMU_QEMU_HARDDISK_97a4e51c-10c3-49c3-9fc4-94e957861be7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_97a4e51c-10c3-49c3-9fc4-94e957861be7-part1', 'scsi-SQEMU_QEMU_HARDDISK_97a4e51c-10c3-49c3-9fc4-94e957861be7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_97a4e51c-10c3-49c3-9fc4-94e957861be7-part14', 'scsi-SQEMU_QEMU_HARDDISK_97a4e51c-10c3-49c3-9fc4-94e957861be7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_97a4e51c-10c3-49c3-9fc4-94e957861be7-part15', 'scsi-SQEMU_QEMU_HARDDISK_97a4e51c-10c3-49c3-9fc4-94e957861be7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_97a4e51c-10c3-49c3-9fc4-94e957861be7-part16', 'scsi-SQEMU_QEMU_HARDDISK_97a4e51c-10c3-49c3-9fc4-94e957861be7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 00:58:00.511895 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 00:58:00.511906 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 
'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-05-00-03-10-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 00:58:00.511910 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.511914 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 00:58:00.511919 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a8ef373d-acca-493f-badf-3a0028b34dd0', 'scsi-SQEMU_QEMU_HARDDISK_a8ef373d-acca-493f-badf-3a0028b34dd0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a8ef373d-acca-493f-badf-3a0028b34dd0-part1', 'scsi-SQEMU_QEMU_HARDDISK_a8ef373d-acca-493f-badf-3a0028b34dd0-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a8ef373d-acca-493f-badf-3a0028b34dd0-part14', 'scsi-SQEMU_QEMU_HARDDISK_a8ef373d-acca-493f-badf-3a0028b34dd0-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a8ef373d-acca-493f-badf-3a0028b34dd0-part15', 'scsi-SQEMU_QEMU_HARDDISK_a8ef373d-acca-493f-badf-3a0028b34dd0-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a8ef373d-acca-493f-badf-3a0028b34dd0-part16', 'scsi-SQEMU_QEMU_HARDDISK_a8ef373d-acca-493f-badf-3a0028b34dd0-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-05 00:58:00.511925 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-05-00-03-18-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-05 00:58:00.511930 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:58:00.511933 | orchestrator |
2026-03-05 00:58:00.511942 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-03-05 00:58:00.511946 | orchestrator | Thursday 05 March 2026 00:46:19 +0000 (0:00:01.943) 0:00:44.192 ********
2026-03-05 00:58:00.511950 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:58:00.511955 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:58:00.511958 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:58:00.511962 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:58:00.511966 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:58:00.511970 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:58:00.511973 | orchestrator |
2026-03-05 00:58:00.511977 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-05 00:58:00.511981 | orchestrator | Thursday 05 March 2026 00:46:20 +0000 (0:00:01.204) 0:00:45.396 ********
2026-03-05 00:58:00.511985 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:58:00.511988 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:58:00.511992 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:58:00.511996 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:58:00.512000 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:58:00.512003 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:58:00.512007 | orchestrator |
2026-03-05 00:58:00.512011 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-05 00:58:00.512014 | orchestrator | Thursday 05 March 2026 00:46:21 +0000 (0:00:00.939) 0:00:46.336 ********
2026-03-05 00:58:00.512018 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:58:00.512022 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:58:00.512026 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:58:00.512029 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:58:00.512033 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:58:00.512037 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:58:00.512041 | orchestrator |
2026-03-05 00:58:00.512044 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-05 00:58:00.512048 | orchestrator | Thursday 05 March 2026 00:46:22 +0000 (0:00:01.097) 0:00:47.434 ********
2026-03-05 00:58:00.512052 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:58:00.512056 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:58:00.512059 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:58:00.512063 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:58:00.512067 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:58:00.512088 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:58:00.512092 | orchestrator |
2026-03-05 00:58:00.512096 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-05 00:58:00.512099 | orchestrator | Thursday 05 March 2026 00:46:23 +0000 (0:00:00.742) 0:00:48.177 ********
2026-03-05 00:58:00.512103 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:58:00.512107 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:58:00.512111 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:58:00.512115 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:58:00.512122 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:58:00.512125 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:58:00.512129 | orchestrator |
2026-03-05 00:58:00.512133 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-05 00:58:00.512141 | orchestrator | Thursday 05 March 2026 00:46:25 +0000 (0:00:01.856) 0:00:50.034 ********
2026-03-05 00:58:00.512145 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:58:00.512149 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:58:00.512153 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:58:00.512157 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:58:00.512160 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:58:00.512165 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:58:00.512171 | orchestrator |
2026-03-05 00:58:00.512176 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-05 00:58:00.512182 | orchestrator | Thursday 05 March 2026 00:46:27 +0000 (0:00:02.156) 0:00:52.191 ********
2026-03-05 00:58:00.512188 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-03-05 00:58:00.512194 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-03-05 00:58:00.512200 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-03-05 00:58:00.512232 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-03-05 00:58:00.512239 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-03-05 00:58:00.512245 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-03-05 00:58:00.512252 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-03-05 00:58:00.512259 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-05 00:58:00.512264 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-03-05 00:58:00.512270 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-03-05 00:58:00.512275 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-03-05 00:58:00.512281 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-03-05 00:58:00.512287 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-03-05 00:58:00.512293 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-03-05 00:58:00.512300 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-03-05 00:58:00.512305 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-03-05 00:58:00.512312 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-03-05 00:58:00.512318 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-03-05 00:58:00.512324 | orchestrator |
2026-03-05 00:58:00.512331 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-03-05 00:58:00.512337 | orchestrator | Thursday 05 March 2026 00:46:33 +0000 (0:00:05.498) 0:00:57.689 ********
2026-03-05 00:58:00.512344 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-05 00:58:00.512350 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-05 00:58:00.512357 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-05 00:58:00.512364 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:58:00.512370 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-05 00:58:00.512378 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-03-05 00:58:00.512383 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-03-05 00:58:00.512387 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:58:00.512391 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-03-05 00:58:00.512403 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-03-05 00:58:00.512408 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-03-05 00:58:00.512413 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:58:00.512417 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-05 00:58:00.512421 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-05 00:58:00.512430 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-05 00:58:00.512435 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-03-05 00:58:00.512440 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-03-05 00:58:00.512444 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-03-05 00:58:00.512449 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:58:00.512454 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:58:00.512458 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-03-05 00:58:00.512464 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-03-05 00:58:00.512471 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-03-05 00:58:00.512477 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:58:00.512483 | orchestrator |
2026-03-05 00:58:00.512489 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-03-05 00:58:00.512495 | orchestrator | Thursday 05 March 2026 00:46:34 +0000 (0:00:01.411) 0:00:59.101 ********
2026-03-05 00:58:00.512501 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:58:00.512507 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:58:00.512514 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:58:00.512521 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-05 00:58:00.512527 | orchestrator |
2026-03-05 00:58:00.512534 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-05 00:58:00.512542 | orchestrator | Thursday 05 March 2026 00:46:36 +0000 (0:00:01.549) 0:01:00.650 ********
2026-03-05 00:58:00.512548 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:58:00.512555 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:58:00.512561 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:58:00.512567 | orchestrator |
2026-03-05 00:58:00.512573 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-05 00:58:00.512579 | orchestrator | Thursday 05 March 2026 00:46:36 +0000 (0:00:00.531) 0:01:01.182 ********
2026-03-05 00:58:00.512584 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:58:00.512589 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:58:00.512594 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:58:00.512600 | orchestrator |
2026-03-05 00:58:00.512606 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-05 00:58:00.512612 | orchestrator | Thursday 05 March 2026 00:46:37 +0000 (0:00:00.544) 0:01:01.726 ********
2026-03-05 00:58:00.512618 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:58:00.512624 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:58:00.512630 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:58:00.512636 | orchestrator |
2026-03-05 00:58:00.512642 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-05 00:58:00.512648 | orchestrator | Thursday 05 March 2026 00:46:38 +0000 (0:00:00.947) 0:01:02.674 ********
2026-03-05 00:58:00.512654 | orchestrator |
ok: [testbed-node-3] 2026-03-05 00:58:00.512660 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:58:00.512666 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:58:00.512672 | orchestrator | 2026-03-05 00:58:00.512678 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-05 00:58:00.512685 | orchestrator | Thursday 05 March 2026 00:46:39 +0000 (0:00:01.037) 0:01:03.711 ******** 2026-03-05 00:58:00.512691 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-05 00:58:00.512697 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-05 00:58:00.512703 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-05 00:58:00.512708 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.512714 | orchestrator | 2026-03-05 00:58:00.512720 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-05 00:58:00.512725 | orchestrator | Thursday 05 March 2026 00:46:39 +0000 (0:00:00.786) 0:01:04.498 ******** 2026-03-05 00:58:00.512738 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-05 00:58:00.512746 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-05 00:58:00.512752 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-05 00:58:00.512758 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.512765 | orchestrator | 2026-03-05 00:58:00.512771 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-05 00:58:00.512777 | orchestrator | Thursday 05 March 2026 00:46:40 +0000 (0:00:00.519) 0:01:05.018 ******** 2026-03-05 00:58:00.512784 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-05 00:58:00.512790 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-05 00:58:00.512796 | orchestrator | skipping: [testbed-node-3] 
=> (item=testbed-node-5)  2026-03-05 00:58:00.512802 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.512808 | orchestrator | 2026-03-05 00:58:00.512814 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-05 00:58:00.512821 | orchestrator | Thursday 05 March 2026 00:46:41 +0000 (0:00:00.911) 0:01:05.929 ******** 2026-03-05 00:58:00.512827 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:58:00.512833 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:58:00.512839 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:58:00.512846 | orchestrator | 2026-03-05 00:58:00.512852 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-05 00:58:00.512858 | orchestrator | Thursday 05 March 2026 00:46:42 +0000 (0:00:00.990) 0:01:06.920 ******** 2026-03-05 00:58:00.512864 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-05 00:58:00.512871 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-05 00:58:00.512887 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-05 00:58:00.512894 | orchestrator | 2026-03-05 00:58:00.512901 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-05 00:58:00.512908 | orchestrator | Thursday 05 March 2026 00:46:43 +0000 (0:00:01.629) 0:01:08.550 ******** 2026-03-05 00:58:00.512915 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-05 00:58:00.512921 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-05 00:58:00.512928 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-05 00:58:00.512934 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-05 00:58:00.512940 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-05 00:58:00.512947 | 
orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-05 00:58:00.512953 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-05 00:58:00.512959 | orchestrator | 2026-03-05 00:58:00.512966 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-05 00:58:00.512972 | orchestrator | Thursday 05 March 2026 00:46:44 +0000 (0:00:01.053) 0:01:09.604 ******** 2026-03-05 00:58:00.512979 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-05 00:58:00.512985 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-05 00:58:00.512994 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-05 00:58:00.513002 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-05 00:58:00.513008 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-05 00:58:00.513014 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-05 00:58:00.513020 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-05 00:58:00.513026 | orchestrator | 2026-03-05 00:58:00.513032 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-05 00:58:00.513043 | orchestrator | Thursday 05 March 2026 00:46:47 +0000 (0:00:02.067) 0:01:11.671 ******** 2026-03-05 00:58:00.513050 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:58:00.513057 | orchestrator | 2026-03-05 00:58:00.513063 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] 
********************* 2026-03-05 00:58:00.513088 | orchestrator | Thursday 05 March 2026 00:46:48 +0000 (0:00:01.428) 0:01:13.100 ******** 2026-03-05 00:58:00.513096 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:58:00.513103 | orchestrator | 2026-03-05 00:58:00.513107 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-05 00:58:00.513111 | orchestrator | Thursday 05 March 2026 00:46:50 +0000 (0:00:01.574) 0:01:14.675 ******** 2026-03-05 00:58:00.513114 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.513118 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:58:00.513122 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:58:00.513126 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:58:00.513130 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:58:00.513133 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:58:00.513137 | orchestrator | 2026-03-05 00:58:00.513141 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-05 00:58:00.513145 | orchestrator | Thursday 05 March 2026 00:46:52 +0000 (0:00:02.041) 0:01:16.716 ******** 2026-03-05 00:58:00.513148 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.513152 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:58:00.513156 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.513160 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:00.513163 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:58:00.513167 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:58:00.513171 | orchestrator | 2026-03-05 00:58:00.513175 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-05 00:58:00.513179 | orchestrator | Thursday 05 March 2026 00:46:53 +0000 
(0:00:01.219) 0:01:17.936 ******** 2026-03-05 00:58:00.513182 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:58:00.513186 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.513190 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:58:00.513194 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:58:00.513197 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.513201 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:00.513205 | orchestrator | 2026-03-05 00:58:00.513209 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-05 00:58:00.513212 | orchestrator | Thursday 05 March 2026 00:46:54 +0000 (0:00:01.365) 0:01:19.302 ******** 2026-03-05 00:58:00.513216 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:58:00.513220 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.513224 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:58:00.513227 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.513231 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:58:00.513235 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:00.513239 | orchestrator | 2026-03-05 00:58:00.513242 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-05 00:58:00.513246 | orchestrator | Thursday 05 March 2026 00:46:56 +0000 (0:00:01.478) 0:01:20.780 ******** 2026-03-05 00:58:00.513250 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.513254 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:58:00.513258 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:58:00.513261 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:58:00.513265 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:58:00.513276 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:58:00.513280 | orchestrator | 2026-03-05 00:58:00.513284 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 
2026-03-05 00:58:00.513291 | orchestrator | Thursday 05 March 2026 00:46:57 +0000 (0:00:01.691) 0:01:22.471 ******** 2026-03-05 00:58:00.513295 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.513299 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:58:00.513303 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:58:00.513307 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.513310 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.513314 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:00.513318 | orchestrator | 2026-03-05 00:58:00.513322 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-05 00:58:00.513326 | orchestrator | Thursday 05 March 2026 00:46:58 +0000 (0:00:00.794) 0:01:23.266 ******** 2026-03-05 00:58:00.513329 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.513333 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:58:00.513337 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:58:00.513342 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.513346 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.513351 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:00.513355 | orchestrator | 2026-03-05 00:58:00.513360 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-05 00:58:00.513364 | orchestrator | Thursday 05 March 2026 00:46:59 +0000 (0:00:00.994) 0:01:24.260 ******** 2026-03-05 00:58:00.513368 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:58:00.513373 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:58:00.513378 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:58:00.513382 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:58:00.513387 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:58:00.513391 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:58:00.513395 | orchestrator | 2026-03-05 
00:58:00.513400 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-05 00:58:00.513404 | orchestrator | Thursday 05 March 2026 00:47:00 +0000 (0:00:01.184) 0:01:25.445 ******** 2026-03-05 00:58:00.513409 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:58:00.513413 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:58:00.513418 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:58:00.513422 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:58:00.513426 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:58:00.513431 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:58:00.513435 | orchestrator | 2026-03-05 00:58:00.513440 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-05 00:58:00.513444 | orchestrator | Thursday 05 March 2026 00:47:02 +0000 (0:00:01.448) 0:01:26.894 ******** 2026-03-05 00:58:00.513449 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.513453 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:58:00.513458 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:58:00.513462 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.513466 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.513471 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:00.513475 | orchestrator | 2026-03-05 00:58:00.513480 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-05 00:58:00.513484 | orchestrator | Thursday 05 March 2026 00:47:02 +0000 (0:00:00.645) 0:01:27.539 ******** 2026-03-05 00:58:00.513489 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.513493 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:58:00.513498 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:58:00.513502 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:58:00.513506 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:58:00.513511 | 
orchestrator | ok: [testbed-node-2] 2026-03-05 00:58:00.513515 | orchestrator | 2026-03-05 00:58:00.513520 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-05 00:58:00.513524 | orchestrator | Thursday 05 March 2026 00:47:03 +0000 (0:00:01.055) 0:01:28.595 ******** 2026-03-05 00:58:00.513528 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:58:00.513533 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:58:00.513543 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:58:00.513547 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.513552 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.513556 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:00.513561 | orchestrator | 2026-03-05 00:58:00.513566 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-05 00:58:00.513570 | orchestrator | Thursday 05 March 2026 00:47:04 +0000 (0:00:00.855) 0:01:29.451 ******** 2026-03-05 00:58:00.513575 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:58:00.513579 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:58:00.513584 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:58:00.513588 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.513593 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.513597 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:00.513602 | orchestrator | 2026-03-05 00:58:00.513606 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-05 00:58:00.513611 | orchestrator | Thursday 05 March 2026 00:47:06 +0000 (0:00:01.183) 0:01:30.635 ******** 2026-03-05 00:58:00.513615 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:58:00.513620 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:58:00.513624 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:58:00.513629 | orchestrator | skipping: [testbed-node-0] 2026-03-05 
00:58:00.513633 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.513638 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:00.513642 | orchestrator | 2026-03-05 00:58:00.513647 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-05 00:58:00.513651 | orchestrator | Thursday 05 March 2026 00:47:07 +0000 (0:00:01.157) 0:01:31.793 ******** 2026-03-05 00:58:00.513656 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.513660 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:58:00.513665 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:58:00.513669 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.513673 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.513678 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:00.513682 | orchestrator | 2026-03-05 00:58:00.513687 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-05 00:58:00.513691 | orchestrator | Thursday 05 March 2026 00:47:08 +0000 (0:00:01.464) 0:01:33.257 ******** 2026-03-05 00:58:00.513696 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.513700 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:58:00.513705 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:58:00.513709 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.513718 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.513723 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:00.513727 | orchestrator | 2026-03-05 00:58:00.513732 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-05 00:58:00.513737 | orchestrator | Thursday 05 March 2026 00:47:09 +0000 (0:00:00.673) 0:01:33.930 ******** 2026-03-05 00:58:00.513741 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.513746 | orchestrator | skipping: [testbed-node-4] 2026-03-05 
00:58:00.513751 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:58:00.513755 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:58:00.513760 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:58:00.513763 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:58:00.513767 | orchestrator | 2026-03-05 00:58:00.513771 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-05 00:58:00.513775 | orchestrator | Thursday 05 March 2026 00:47:10 +0000 (0:00:01.184) 0:01:35.114 ******** 2026-03-05 00:58:00.513779 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:58:00.513782 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:58:00.513786 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:58:00.513790 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:58:00.513793 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:58:00.513797 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:58:00.513804 | orchestrator | 2026-03-05 00:58:00.513807 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-05 00:58:00.513811 | orchestrator | Thursday 05 March 2026 00:47:11 +0000 (0:00:01.264) 0:01:36.379 ******** 2026-03-05 00:58:00.513815 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:58:00.513819 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:58:00.513822 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:58:00.513826 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:58:00.513830 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:58:00.513834 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:58:00.513837 | orchestrator | 2026-03-05 00:58:00.513841 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-05 00:58:00.513845 | orchestrator | Thursday 05 March 2026 00:47:13 +0000 (0:00:01.953) 0:01:38.332 ******** 2026-03-05 00:58:00.513849 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:58:00.513852 | 
orchestrator | changed: [testbed-node-4] 2026-03-05 00:58:00.513856 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:58:00.513860 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:58:00.513864 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:58:00.513867 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:58:00.513871 | orchestrator | 2026-03-05 00:58:00.513875 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-05 00:58:00.513909 | orchestrator | Thursday 05 March 2026 00:47:16 +0000 (0:00:02.954) 0:01:41.287 ******** 2026-03-05 00:58:00.513914 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:58:00.513918 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:58:00.513921 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:58:00.513925 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:58:00.513929 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:58:00.513933 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:58:00.513937 | orchestrator | 2026-03-05 00:58:00.513941 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-05 00:58:00.513945 | orchestrator | Thursday 05 March 2026 00:47:20 +0000 (0:00:03.448) 0:01:44.735 ******** 2026-03-05 00:58:00.513948 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:58:00.513952 | orchestrator | 2026-03-05 00:58:00.513956 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-03-05 00:58:00.513960 | orchestrator | Thursday 05 March 2026 00:47:21 +0000 (0:00:01.752) 0:01:46.488 ******** 2026-03-05 00:58:00.513964 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.513968 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:58:00.513972 | orchestrator | 
skipping: [testbed-node-5] 2026-03-05 00:58:00.513975 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.513979 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.513983 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:00.513987 | orchestrator | 2026-03-05 00:58:00.513991 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-03-05 00:58:00.513994 | orchestrator | Thursday 05 March 2026 00:47:22 +0000 (0:00:00.898) 0:01:47.387 ******** 2026-03-05 00:58:00.513998 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.514002 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:58:00.514006 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:58:00.514010 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.514013 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.514051 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:00.514055 | orchestrator | 2026-03-05 00:58:00.514059 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-03-05 00:58:00.514063 | orchestrator | Thursday 05 March 2026 00:47:23 +0000 (0:00:01.069) 0:01:48.456 ******** 2026-03-05 00:58:00.514067 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-05 00:58:00.514078 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-05 00:58:00.514086 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-05 00:58:00.514090 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-05 00:58:00.514094 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-05 00:58:00.514098 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-05 00:58:00.514101 | orchestrator | ok: [testbed-node-3] => 
(item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-05 00:58:00.514105 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-05 00:58:00.514109 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-05 00:58:00.514113 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-05 00:58:00.514122 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-05 00:58:00.514127 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-05 00:58:00.514130 | orchestrator | 2026-03-05 00:58:00.514134 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-03-05 00:58:00.514138 | orchestrator | Thursday 05 March 2026 00:47:25 +0000 (0:00:01.692) 0:01:50.149 ******** 2026-03-05 00:58:00.514142 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:58:00.514146 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:58:00.514149 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:58:00.514153 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:58:00.514157 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:58:00.514161 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:58:00.514164 | orchestrator | 2026-03-05 00:58:00.514168 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-03-05 00:58:00.514172 | orchestrator | Thursday 05 March 2026 00:47:27 +0000 (0:00:01.987) 0:01:52.136 ******** 2026-03-05 00:58:00.514176 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.514180 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:58:00.514183 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:58:00.514187 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.514191 | 
orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.514194 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:00.514198 | orchestrator | 2026-03-05 00:58:00.514202 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-03-05 00:58:00.514206 | orchestrator | Thursday 05 March 2026 00:47:28 +0000 (0:00:01.331) 0:01:53.468 ******** 2026-03-05 00:58:00.514209 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.514213 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:58:00.514217 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:58:00.514220 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.514224 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.514228 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:00.514232 | orchestrator | 2026-03-05 00:58:00.514235 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-05 00:58:00.514239 | orchestrator | Thursday 05 March 2026 00:47:30 +0000 (0:00:01.342) 0:01:54.811 ******** 2026-03-05 00:58:00.514243 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.514247 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:58:00.514250 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:58:00.514254 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.514258 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.514262 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:00.514265 | orchestrator | 2026-03-05 00:58:00.514269 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-05 00:58:00.514273 | orchestrator | Thursday 05 March 2026 00:47:31 +0000 (0:00:00.844) 0:01:55.655 ******** 2026-03-05 00:58:00.514277 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, 
testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:58:00.514286 | orchestrator | 2026-03-05 00:58:00.514290 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-03-05 00:58:00.514294 | orchestrator | Thursday 05 March 2026 00:47:32 +0000 (0:00:01.383) 0:01:57.038 ******** 2026-03-05 00:58:00.514297 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:58:00.514301 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:58:00.514305 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:58:00.514308 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:58:00.514312 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:58:00.514316 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:58:00.514319 | orchestrator | 2026-03-05 00:58:00.514323 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-03-05 00:58:00.514327 | orchestrator | Thursday 05 March 2026 00:48:28 +0000 (0:00:55.688) 0:02:52.727 ******** 2026-03-05 00:58:00.514331 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-05 00:58:00.514334 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-05 00:58:00.514338 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-05 00:58:00.514342 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.514346 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-05 00:58:00.514349 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-05 00:58:00.514353 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-05 00:58:00.514357 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:58:00.514360 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-05 
00:58:00.514364 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-05 00:58:00.514368 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-05 00:58:00.514372 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:58:00.514375 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-05 00:58:00.514379 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-05 00:58:00.514383 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-05 00:58:00.514387 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.514390 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-05 00:58:00.514394 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-05 00:58:00.514398 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-05 00:58:00.514401 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.514409 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-05 00:58:00.514413 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-05 00:58:00.514417 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-05 00:58:00.514420 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:00.514424 | orchestrator | 2026-03-05 00:58:00.514428 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-03-05 00:58:00.514431 | orchestrator | Thursday 05 March 2026 00:48:29 +0000 (0:00:00.967) 0:02:53.695 ******** 2026-03-05 00:58:00.514435 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.514439 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:58:00.514443 | 
orchestrator | skipping: [testbed-node-5] 2026-03-05 00:58:00.514446 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.514450 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.514454 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:00.514461 | orchestrator | 2026-03-05 00:58:00.514465 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-03-05 00:58:00.514469 | orchestrator | Thursday 05 March 2026 00:48:30 +0000 (0:00:01.028) 0:02:54.723 ******** 2026-03-05 00:58:00.514473 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.514476 | orchestrator | 2026-03-05 00:58:00.514480 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-03-05 00:58:00.514484 | orchestrator | Thursday 05 March 2026 00:48:30 +0000 (0:00:00.183) 0:02:54.907 ******** 2026-03-05 00:58:00.514487 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.514491 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:58:00.514495 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:58:00.514498 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.514502 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.514506 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:00.514509 | orchestrator | 2026-03-05 00:58:00.514513 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-03-05 00:58:00.514517 | orchestrator | Thursday 05 March 2026 00:48:31 +0000 (0:00:01.176) 0:02:56.083 ******** 2026-03-05 00:58:00.514521 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.514524 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:58:00.514528 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:58:00.514532 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.514535 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.514539 | 
orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:00.514543 | orchestrator | 2026-03-05 00:58:00.514546 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-03-05 00:58:00.514550 | orchestrator | Thursday 05 March 2026 00:48:32 +0000 (0:00:01.389) 0:02:57.473 ******** 2026-03-05 00:58:00.514554 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.514558 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:58:00.514562 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:58:00.514565 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.514569 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.514573 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:00.514577 | orchestrator | 2026-03-05 00:58:00.514580 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-05 00:58:00.514584 | orchestrator | Thursday 05 March 2026 00:48:34 +0000 (0:00:01.294) 0:02:58.768 ******** 2026-03-05 00:58:00.514588 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:58:00.514592 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:58:00.514595 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:58:00.514599 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:58:00.514603 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:58:00.514606 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:58:00.514610 | orchestrator | 2026-03-05 00:58:00.514614 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-05 00:58:00.514618 | orchestrator | Thursday 05 March 2026 00:48:36 +0000 (0:00:02.357) 0:03:01.125 ******** 2026-03-05 00:58:00.514621 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:58:00.514625 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:58:00.514629 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:58:00.514632 | orchestrator | ok: [testbed-node-0] 2026-03-05 
00:58:00.514636 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:58:00.514640 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:58:00.514643 | orchestrator | 2026-03-05 00:58:00.514647 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-05 00:58:00.514651 | orchestrator | Thursday 05 March 2026 00:48:37 +0000 (0:00:00.635) 0:03:01.761 ******** 2026-03-05 00:58:00.514655 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:58:00.514659 | orchestrator | 2026-03-05 00:58:00.514663 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-03-05 00:58:00.514670 | orchestrator | Thursday 05 March 2026 00:48:38 +0000 (0:00:01.203) 0:03:02.964 ******** 2026-03-05 00:58:00.514674 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.514677 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:58:00.514681 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:58:00.514685 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.514688 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.514692 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:00.514696 | orchestrator | 2026-03-05 00:58:00.514700 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-03-05 00:58:00.514703 | orchestrator | Thursday 05 March 2026 00:48:39 +0000 (0:00:00.872) 0:03:03.836 ******** 2026-03-05 00:58:00.514707 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.514711 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:58:00.514714 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:58:00.514718 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.514722 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.514725 | 
orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:00.514729 | orchestrator | 2026-03-05 00:58:00.514733 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-03-05 00:58:00.514737 | orchestrator | Thursday 05 March 2026 00:48:40 +0000 (0:00:00.848) 0:03:04.685 ******** 2026-03-05 00:58:00.514740 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.514744 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:58:00.514755 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:58:00.514760 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.514764 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.514768 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:00.514772 | orchestrator | 2026-03-05 00:58:00.514775 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-03-05 00:58:00.514779 | orchestrator | Thursday 05 March 2026 00:48:41 +0000 (0:00:01.618) 0:03:06.304 ******** 2026-03-05 00:58:00.514783 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.514787 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:58:00.514790 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:58:00.514794 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.514798 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.514802 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:00.514805 | orchestrator | 2026-03-05 00:58:00.514809 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-03-05 00:58:00.514813 | orchestrator | Thursday 05 March 2026 00:48:42 +0000 (0:00:01.133) 0:03:07.437 ******** 2026-03-05 00:58:00.514817 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:58:00.514821 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:58:00.514824 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.514828 | 
orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.514832 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.514835 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:00.514839 | orchestrator | 2026-03-05 00:58:00.514843 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-03-05 00:58:00.514846 | orchestrator | Thursday 05 March 2026 00:48:43 +0000 (0:00:01.173) 0:03:08.611 ******** 2026-03-05 00:58:00.514850 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.514854 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:58:00.514858 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:58:00.514861 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.514865 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.514869 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:00.514872 | orchestrator | 2026-03-05 00:58:00.514876 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-03-05 00:58:00.514880 | orchestrator | Thursday 05 March 2026 00:48:44 +0000 (0:00:00.836) 0:03:09.447 ******** 2026-03-05 00:58:00.514884 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.514890 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:58:00.514894 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.514898 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:58:00.514901 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.514905 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:00.514909 | orchestrator | 2026-03-05 00:58:00.514913 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-03-05 00:58:00.514917 | orchestrator | Thursday 05 March 2026 00:48:45 +0000 (0:00:01.095) 0:03:10.543 ******** 2026-03-05 00:58:00.514920 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.514924 | 
orchestrator | skipping: [testbed-node-4] 2026-03-05 00:58:00.514928 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:58:00.514931 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.514935 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.514939 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:00.514943 | orchestrator | 2026-03-05 00:58:00.514946 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-03-05 00:58:00.514950 | orchestrator | Thursday 05 March 2026 00:48:46 +0000 (0:00:00.627) 0:03:11.171 ******** 2026-03-05 00:58:00.514954 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:58:00.514958 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:58:00.514962 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:58:00.514965 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:58:00.514969 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:58:00.514973 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:58:00.514976 | orchestrator | 2026-03-05 00:58:00.514980 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-05 00:58:00.514984 | orchestrator | Thursday 05 March 2026 00:48:47 +0000 (0:00:01.172) 0:03:12.343 ******** 2026-03-05 00:58:00.514988 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:58:00.514992 | orchestrator | 2026-03-05 00:58:00.514995 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-03-05 00:58:00.514999 | orchestrator | Thursday 05 March 2026 00:48:49 +0000 (0:00:01.582) 0:03:13.926 ******** 2026-03-05 00:58:00.515003 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2026-03-05 00:58:00.515007 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2026-03-05 00:58:00.515011 | 
orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2026-03-05 00:58:00.515015 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2026-03-05 00:58:00.515018 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2026-03-05 00:58:00.515022 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2026-03-05 00:58:00.515026 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2026-03-05 00:58:00.515030 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-03-05 00:58:00.515033 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2026-03-05 00:58:00.515037 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2026-03-05 00:58:00.515041 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-03-05 00:58:00.515045 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2026-03-05 00:58:00.515048 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2026-03-05 00:58:00.515052 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-03-05 00:58:00.515056 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2026-03-05 00:58:00.515060 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-03-05 00:58:00.515063 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-03-05 00:58:00.515067 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2026-03-05 00:58:00.515100 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-03-05 00:58:00.515104 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2026-03-05 00:58:00.515111 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2026-03-05 00:58:00.515115 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-03-05 00:58:00.515119 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-03-05 
00:58:00.515123 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2026-03-05 00:58:00.515126 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-03-05 00:58:00.515130 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2026-03-05 00:58:00.515134 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-03-05 00:58:00.515137 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2026-03-05 00:58:00.515141 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-03-05 00:58:00.515145 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2026-03-05 00:58:00.515149 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2026-03-05 00:58:00.515152 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-03-05 00:58:00.515156 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-03-05 00:58:00.515160 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2026-03-05 00:58:00.515164 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-03-05 00:58:00.515167 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2026-03-05 00:58:00.515171 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2026-03-05 00:58:00.515175 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-03-05 00:58:00.515179 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-03-05 00:58:00.515183 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2026-03-05 00:58:00.515186 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2026-03-05 00:58:00.515190 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-03-05 00:58:00.515194 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-05 
00:58:00.515198 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2026-03-05 00:58:00.515201 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-03-05 00:58:00.515205 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2026-03-05 00:58:00.515209 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2026-03-05 00:58:00.515213 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-05 00:58:00.515216 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-05 00:58:00.515220 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2026-03-05 00:58:00.515224 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-05 00:58:00.515227 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-05 00:58:00.515231 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-05 00:58:00.515235 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-05 00:58:00.515239 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-05 00:58:00.515242 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2026-03-05 00:58:00.515246 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-05 00:58:00.515250 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-05 00:58:00.515253 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-05 00:58:00.515257 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-05 00:58:00.515261 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-05 00:58:00.515267 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 
2026-03-05 00:58:00.515271 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-05 00:58:00.515274 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-05 00:58:00.515278 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-05 00:58:00.515282 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-05 00:58:00.515285 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-05 00:58:00.515289 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-05 00:58:00.515293 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-05 00:58:00.515297 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-05 00:58:00.515300 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-05 00:58:00.515304 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-05 00:58:00.515308 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-05 00:58:00.515311 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-05 00:58:00.515315 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-05 00:58:00.515319 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-05 00:58:00.515327 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-05 00:58:00.515331 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2026-03-05 00:58:00.515335 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-05 00:58:00.515339 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-05 00:58:00.515343 | orchestrator | 
changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-05 00:58:00.515346 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2026-03-05 00:58:00.515350 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2026-03-05 00:58:00.515354 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-05 00:58:00.515358 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2026-03-05 00:58:00.515361 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-05 00:58:00.515365 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-05 00:58:00.515369 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2026-03-05 00:58:00.515373 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2026-03-05 00:58:00.515377 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2026-03-05 00:58:00.515380 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-05 00:58:00.515384 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2026-03-05 00:58:00.515388 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2026-03-05 00:58:00.515392 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2026-03-05 00:58:00.515395 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2026-03-05 00:58:00.515399 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2026-03-05 00:58:00.515403 | orchestrator | 2026-03-05 00:58:00.515407 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-05 00:58:00.515411 | orchestrator | Thursday 05 March 2026 00:48:56 +0000 (0:00:07.426) 0:03:21.353 ******** 2026-03-05 00:58:00.515414 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.515418 | orchestrator | skipping: [testbed-node-1] 2026-03-05 
00:58:00.515422 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:00.515426 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-05 00:58:00.515435 | orchestrator | 2026-03-05 00:58:00.515439 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-03-05 00:58:00.515442 | orchestrator | Thursday 05 March 2026 00:48:57 +0000 (0:00:01.144) 0:03:22.497 ******** 2026-03-05 00:58:00.515446 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-05 00:58:00.515450 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-05 00:58:00.515454 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-05 00:58:00.515458 | orchestrator | 2026-03-05 00:58:00.515462 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-03-05 00:58:00.515466 | orchestrator | Thursday 05 March 2026 00:48:59 +0000 (0:00:01.296) 0:03:23.793 ******** 2026-03-05 00:58:00.515469 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-05 00:58:00.515473 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-05 00:58:00.515477 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-05 00:58:00.515481 | orchestrator | 2026-03-05 00:58:00.515485 | orchestrator | TASK [ceph-config : Reset num_osds] 
******************************************** 2026-03-05 00:58:00.515489 | orchestrator | Thursday 05 March 2026 00:49:00 +0000 (0:00:01.626) 0:03:25.420 ******** 2026-03-05 00:58:00.515492 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:58:00.515496 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:58:00.515500 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:58:00.515504 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.515507 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.515511 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:00.515515 | orchestrator | 2026-03-05 00:58:00.515519 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-03-05 00:58:00.515523 | orchestrator | Thursday 05 March 2026 00:49:01 +0000 (0:00:01.083) 0:03:26.503 ******** 2026-03-05 00:58:00.515526 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:58:00.515531 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:58:00.515534 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:58:00.515538 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.515542 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.515546 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:00.515549 | orchestrator | 2026-03-05 00:58:00.515553 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-03-05 00:58:00.515557 | orchestrator | Thursday 05 March 2026 00:49:03 +0000 (0:00:01.524) 0:03:28.027 ******** 2026-03-05 00:58:00.515561 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.515565 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:58:00.515568 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:58:00.515572 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.515576 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.515580 | orchestrator | skipping: [testbed-node-2] 2026-03-05 
00:58:00.515583 | orchestrator | 2026-03-05 00:58:00.515591 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-03-05 00:58:00.515595 | orchestrator | Thursday 05 March 2026 00:49:04 +0000 (0:00:01.148) 0:03:29.176 ******** 2026-03-05 00:58:00.515599 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.515603 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:58:00.515606 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:58:00.515610 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.515614 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.515617 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:00.515623 | orchestrator | 2026-03-05 00:58:00.515627 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-03-05 00:58:00.515631 | orchestrator | Thursday 05 March 2026 00:49:05 +0000 (0:00:01.278) 0:03:30.455 ******** 2026-03-05 00:58:00.515634 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.515638 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:58:00.515642 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:58:00.515645 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.515649 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.515653 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:00.515657 | orchestrator | 2026-03-05 00:58:00.515660 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-03-05 00:58:00.515664 | orchestrator | Thursday 05 March 2026 00:49:06 +0000 (0:00:00.811) 0:03:31.266 ******** 2026-03-05 00:58:00.515668 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.515672 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:58:00.515676 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:58:00.515679 | orchestrator | skipping: 
[testbed-node-0]
2026-03-05 00:58:00.515683 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:58:00.515687 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:58:00.515690 | orchestrator |
2026-03-05 00:58:00.515694 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-05 00:58:00.515698 | orchestrator | Thursday 05 March 2026 00:49:07 +0000 (0:00:01.145) 0:03:32.412 ********
2026-03-05 00:58:00.515702 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:58:00.515706 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:58:00.515710 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:58:00.515713 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:58:00.515717 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:58:00.515721 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:58:00.515725 | orchestrator |
2026-03-05 00:58:00.515728 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-05 00:58:00.515732 | orchestrator | Thursday 05 March 2026 00:49:08 +0000 (0:00:00.779) 0:03:33.192 ********
2026-03-05 00:58:00.515736 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:58:00.515740 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:58:00.515743 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:58:00.515747 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:58:00.515751 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:58:00.515755 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:58:00.515758 | orchestrator |
2026-03-05 00:58:00.515762 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-05 00:58:00.515766 | orchestrator | Thursday 05 March 2026 00:49:09 +0000 (0:00:01.025) 0:03:34.218 ********
2026-03-05 00:58:00.515770 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:58:00.515774 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:58:00.515777 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:58:00.515781 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:58:00.515785 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:58:00.515789 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:58:00.515793 | orchestrator |
2026-03-05 00:58:00.515796 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-05 00:58:00.515800 | orchestrator | Thursday 05 March 2026 00:49:13 +0000 (0:00:03.575) 0:03:37.793 ********
2026-03-05 00:58:00.515804 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:58:00.515808 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:58:00.515811 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:58:00.515815 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:58:00.515819 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:58:00.515823 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:58:00.515827 | orchestrator |
2026-03-05 00:58:00.515830 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-05 00:58:00.515837 | orchestrator | Thursday 05 March 2026 00:49:14 +0000 (0:00:01.168) 0:03:38.961 ********
2026-03-05 00:58:00.515841 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:58:00.515845 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:58:00.515849 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:58:00.515852 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:58:00.515856 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:58:00.515860 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:58:00.515864 | orchestrator |
2026-03-05 00:58:00.515868 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-05 00:58:00.515872 | orchestrator | Thursday 05 March 2026 00:49:15 +0000 (0:00:01.177) 0:03:40.139 ********
2026-03-05 00:58:00.515875 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:58:00.515879 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:58:00.515883 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:58:00.515887 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:58:00.515890 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:58:00.515894 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:58:00.515898 | orchestrator |
2026-03-05 00:58:00.515902 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-05 00:58:00.515906 | orchestrator | Thursday 05 March 2026 00:49:16 +0000 (0:00:01.151) 0:03:41.291 ********
2026-03-05 00:58:00.515909 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-05 00:58:00.515913 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-05 00:58:00.515917 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-05 00:58:00.515921 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:58:00.515929 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:58:00.515934 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:58:00.515937 | orchestrator |
2026-03-05 00:58:00.515941 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-03-05 00:58:00.515945 | orchestrator | Thursday 05 March 2026 00:49:17 +0000 (0:00:00.912) 0:03:42.203 ********
2026-03-05 00:58:00.515950 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-03-05 00:58:00.515956 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-03-05 00:58:00.515960 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:58:00.515964 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2026-03-05 00:58:00.515968 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2026-03-05 00:58:00.515972 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:58:00.515976 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-03-05 00:58:00.515982 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-03-05 00:58:00.515986 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:58:00.515990 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:58:00.515993 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:58:00.515997 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:58:00.516001 | orchestrator |
2026-03-05 00:58:00.516005 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-03-05 00:58:00.516009 | orchestrator | Thursday 05 March 2026 00:49:18 +0000 (0:00:01.211) 0:03:43.415 ********
2026-03-05 00:58:00.516012 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:58:00.516016 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:58:00.516020 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:58:00.516024 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:58:00.516027 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:58:00.516031 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:58:00.516035 | orchestrator |
2026-03-05 00:58:00.516039 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-03-05 00:58:00.516043 | orchestrator | Thursday 05 March 2026 00:49:19 +0000 (0:00:00.727) 0:03:44.142 ********
2026-03-05 00:58:00.516046 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:58:00.516050 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:58:00.516054 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:58:00.516057 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:58:00.516061 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:58:00.516065 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:58:00.516077 | orchestrator |
2026-03-05 00:58:00.516081 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-05 00:58:00.516085 | orchestrator | Thursday 05 March 2026 00:49:20 +0000 (0:00:00.997) 0:03:45.140 ********
2026-03-05 00:58:00.516089 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:58:00.516093 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:58:00.516097 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:58:00.516100 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:58:00.516104 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:58:00.516108 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:58:00.516112 | orchestrator |
2026-03-05 00:58:00.516115 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-05 00:58:00.516119 | orchestrator | Thursday 05 March 2026 00:49:21 +0000 (0:00:00.715) 0:03:45.856 ********
2026-03-05 00:58:00.516123 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:58:00.516127 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:58:00.516131 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:58:00.516134 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:58:00.516138 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:58:00.516142 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:58:00.516145 | orchestrator |
2026-03-05 00:58:00.516149 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-05 00:58:00.516158 | orchestrator | Thursday 05 March 2026 00:49:22 +0000 (0:00:01.003) 0:03:46.859 ********
2026-03-05 00:58:00.516163 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:58:00.516166 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:58:00.516170 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:58:00.516174 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:58:00.516178 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:58:00.516181 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:58:00.516185 | orchestrator |
2026-03-05 00:58:00.516192 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-05 00:58:00.516195 | orchestrator | Thursday 05 March 2026 00:49:22 +0000 (0:00:00.726) 0:03:47.586 ********
2026-03-05 00:58:00.516199 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:58:00.516203 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:58:00.516207 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:58:00.516211 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:58:00.516214 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:58:00.516218 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:58:00.516222 | orchestrator |
2026-03-05 00:58:00.516226 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-05 00:58:00.516230 | orchestrator | Thursday 05 March 2026 00:49:24 +0000 (0:00:01.090) 0:03:48.677 ********
2026-03-05 00:58:00.516234 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-05 00:58:00.516237 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-05 00:58:00.516241 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-05 00:58:00.516245 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:58:00.516249 | orchestrator |
2026-03-05 00:58:00.516252 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-05 00:58:00.516256 | orchestrator | Thursday 05 March 2026 00:49:24 +0000 (0:00:00.484) 0:03:49.162 ********
2026-03-05 00:58:00.516260 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-05 00:58:00.516264 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-05 00:58:00.516268 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-05 00:58:00.516272 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:58:00.516275 | orchestrator |
2026-03-05 00:58:00.516279 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-05 00:58:00.516283 | orchestrator | Thursday 05 March 2026 00:49:24 +0000 (0:00:00.466) 0:03:49.628 ********
2026-03-05 00:58:00.516287 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-05 00:58:00.516291 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-05 00:58:00.516294 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-05 00:58:00.516298 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:58:00.516302 | orchestrator |
2026-03-05 00:58:00.516306 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-05 00:58:00.516310 | orchestrator | Thursday 05 March 2026 00:49:25 +0000 (0:00:00.544) 0:03:50.172 ********
2026-03-05 00:58:00.516313 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:58:00.516317 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:58:00.516321 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:58:00.516325 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:58:00.516329 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:58:00.516332 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:58:00.516336 | orchestrator |
2026-03-05 00:58:00.516340 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-05 00:58:00.516344 | orchestrator | Thursday 05 March 2026 00:49:26 +0000 (0:00:00.788) 0:03:50.961 ********
2026-03-05 00:58:00.516348 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-05 00:58:00.516351 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-03-05 00:58:00.516355 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-03-05 00:58:00.516359 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-03-05 00:58:00.516363 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:58:00.516366 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-03-05 00:58:00.516370 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:58:00.516374 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-03-05 00:58:00.516378 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:58:00.516381 | orchestrator |
2026-03-05 00:58:00.516385 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-03-05 00:58:00.516389 | orchestrator | Thursday 05 March 2026 00:49:28 +0000 (0:00:02.579) 0:03:53.540 ********
2026-03-05 00:58:00.516395 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:58:00.516399 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:58:00.516403 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:58:00.516406 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:58:00.516410 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:58:00.516414 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:58:00.516418 | orchestrator |
2026-03-05 00:58:00.516421 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-05 00:58:00.516425 | orchestrator | Thursday 05 March 2026 00:49:32 +0000 (0:00:03.581) 0:03:57.122 ********
2026-03-05 00:58:00.516429 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:58:00.516433 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:58:00.516437 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:58:00.516441 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:58:00.516444 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:58:00.516448 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:58:00.516452 | orchestrator |
2026-03-05 00:58:00.516456 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-03-05 00:58:00.516460 | orchestrator | Thursday 05 March 2026 00:49:33 +0000 (0:00:01.507) 0:03:58.629 ********
2026-03-05 00:58:00.516463 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:58:00.516467 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:58:00.516471 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:58:00.516475 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 00:58:00.516478 | orchestrator |
2026-03-05 00:58:00.516482 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-03-05 00:58:00.516488 | orchestrator | Thursday 05 March 2026 00:49:35 +0000 (0:00:01.353) 0:03:59.983 ********
2026-03-05 00:58:00.516494 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:58:00.516498 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:58:00.516502 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:58:00.516506 | orchestrator |
2026-03-05 00:58:00.516510 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-03-05 00:58:00.516514 | orchestrator | Thursday 05 March 2026 00:49:35 +0000 (0:00:00.402) 0:04:00.385 ********
2026-03-05 00:58:00.516517 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:58:00.516521 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:58:00.516525 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:58:00.516529 | orchestrator |
2026-03-05 00:58:00.516532 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-03-05 00:58:00.516536 | orchestrator | Thursday 05 March 2026 00:49:37 +0000 (0:00:01.684) 0:04:02.070 ********
2026-03-05 00:58:00.516540 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-05 00:58:00.516544 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-05 00:58:00.516548 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-05 00:58:00.516551 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:58:00.516555 | orchestrator |
2026-03-05 00:58:00.516559 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-03-05 00:58:00.516563 | orchestrator | Thursday 05 March 2026 00:49:38 +0000 (0:00:00.648) 0:04:02.719 ********
2026-03-05 00:58:00.516567 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:58:00.516571 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:58:00.516574 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:58:00.516578 | orchestrator |
2026-03-05 00:58:00.516582 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-03-05 00:58:00.516586 | orchestrator | Thursday 05 March 2026 00:49:38 +0000 (0:00:00.380) 0:04:03.100 ********
2026-03-05 00:58:00.516590 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:58:00.516593 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:58:00.516597 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:58:00.516604 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-05 00:58:00.516607 | orchestrator |
2026-03-05 00:58:00.516611 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-03-05 00:58:00.516615 | orchestrator | Thursday 05 March 2026 00:49:39 +0000 (0:00:01.075) 0:04:04.175 ********
2026-03-05 00:58:00.516619 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-05 00:58:00.516623 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-05 00:58:00.516626 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-05 00:58:00.516630 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:58:00.516634 | orchestrator |
2026-03-05 00:58:00.516638 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-03-05 00:58:00.516642 | orchestrator | Thursday 05 March 2026 00:49:39 +0000 (0:00:00.409) 0:04:04.585 ********
2026-03-05 00:58:00.516645 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:58:00.516649 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:58:00.516653 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:58:00.516657 | orchestrator |
2026-03-05 00:58:00.516660 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-03-05 00:58:00.516664 | orchestrator | Thursday 05 March 2026 00:49:40 +0000 (0:00:00.423) 0:04:05.008 ********
2026-03-05 00:58:00.516668 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:58:00.516672 | orchestrator |
2026-03-05 00:58:00.516676 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-03-05 00:58:00.516679 | orchestrator | Thursday 05 March 2026 00:49:40 +0000 (0:00:00.259) 0:04:05.268 ********
2026-03-05 00:58:00.516683 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:58:00.516687 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:58:00.516691 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:58:00.516695 | orchestrator |
2026-03-05 00:58:00.516699 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-03-05 00:58:00.516703 | orchestrator | Thursday 05 March 2026 00:49:41 +0000 (0:00:00.386) 0:04:05.655 ********
2026-03-05 00:58:00.516706 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:58:00.516710 | orchestrator |
2026-03-05 00:58:00.516714 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-03-05 00:58:00.516718 | orchestrator | Thursday 05 March 2026 00:49:41 +0000 (0:00:00.227) 0:04:05.882 ********
2026-03-05 00:58:00.516721 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:58:00.516725 | orchestrator |
2026-03-05 00:58:00.516729 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-03-05 00:58:00.516733 | orchestrator | Thursday 05 March 2026 00:49:41 +0000 (0:00:00.245) 0:04:06.128 ********
2026-03-05 00:58:00.516737 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:58:00.516740 | orchestrator |
2026-03-05 00:58:00.516744 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-03-05 00:58:00.516748 | orchestrator | Thursday 05 March 2026 00:49:41 +0000 (0:00:00.389) 0:04:06.518 ********
2026-03-05 00:58:00.516752 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:58:00.516756 | orchestrator |
2026-03-05 00:58:00.516760 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-03-05 00:58:00.516763 | orchestrator | Thursday 05 March 2026 00:49:42 +0000 (0:00:00.284) 0:04:06.802 ********
2026-03-05 00:58:00.516767 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:58:00.516771 | orchestrator |
2026-03-05 00:58:00.516775 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-03-05 00:58:00.516778 | orchestrator | Thursday 05 March 2026 00:49:42 +0000 (0:00:00.250) 0:04:07.052 ********
2026-03-05 00:58:00.516782 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-05 00:58:00.516786 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-05 00:58:00.516790 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-05 00:58:00.516793 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:58:00.516800 | orchestrator |
2026-03-05 00:58:00.516804 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-03-05 00:58:00.516811 | orchestrator | Thursday 05 March 2026 00:49:42 +0000 (0:00:00.486) 0:04:07.539 ********
2026-03-05 00:58:00.516815 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:58:00.516819 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:58:00.516823 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:58:00.516827 | orchestrator |
2026-03-05 00:58:00.516830 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-03-05 00:58:00.516834 | orchestrator | Thursday 05 March 2026 00:49:43 +0000 (0:00:00.377) 0:04:07.917 ********
2026-03-05 00:58:00.516838 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:58:00.516842 | orchestrator |
2026-03-05 00:58:00.516846 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-03-05 00:58:00.516850 | orchestrator | Thursday 05 March 2026 00:49:43 +0000 (0:00:00.238) 0:04:08.155 ********
2026-03-05 00:58:00.516853 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:58:00.516857 | orchestrator |
2026-03-05 00:58:00.516861 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-03-05 00:58:00.516865 | orchestrator | Thursday 05 March 2026 00:49:43 +0000 (0:00:00.221) 0:04:08.376 ********
2026-03-05 00:58:00.516869 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:58:00.516872 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:58:00.516876 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:58:00.516880 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-05 00:58:00.516884 | orchestrator |
2026-03-05 00:58:00.516888 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-03-05 00:58:00.516891 | orchestrator | Thursday 05 March 2026 00:49:45 +0000 (0:00:01.349) 0:04:09.726 ********
2026-03-05 00:58:00.516895 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:58:00.516899 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:58:00.516903 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:58:00.516907 | orchestrator |
2026-03-05 00:58:00.516911 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-03-05 00:58:00.516914 | orchestrator | Thursday 05 March 2026 00:49:45 +0000 (0:00:00.546) 0:04:10.273 ********
2026-03-05 00:58:00.516918 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:58:00.516922 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:58:00.516926 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:58:00.516930 | orchestrator |
2026-03-05 00:58:00.516933 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-03-05 00:58:00.516937 | orchestrator | Thursday 05 March 2026 00:49:47 +0000 (0:00:01.527) 0:04:11.800 ********
2026-03-05 00:58:00.516941 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-05 00:58:00.516945 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-05 00:58:00.516948 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-05 00:58:00.516952 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:58:00.516956 | orchestrator |
2026-03-05 00:58:00.516960 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-03-05 00:58:00.516964 | orchestrator | Thursday 05 March 2026 00:49:48 +0000 (0:00:01.033) 0:04:12.834 ********
2026-03-05 00:58:00.516967 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:58:00.516971 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:58:00.516975 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:58:00.516979 | orchestrator |
2026-03-05 00:58:00.516982 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-03-05 00:58:00.516986 | orchestrator | Thursday 05 March 2026 00:49:49 +0000 (0:00:01.007) 0:04:13.841 ********
2026-03-05 00:58:00.516990 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:58:00.516994 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:58:00.516997 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:58:00.517003 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-05 00:58:00.517007 | orchestrator |
2026-03-05 00:58:00.517011 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-03-05 00:58:00.517015 | orchestrator | Thursday 05 March 2026 00:49:50 +0000 (0:00:01.487) 0:04:15.329 ********
2026-03-05 00:58:00.517019 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:58:00.517023 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:58:00.517026 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:58:00.517030 | orchestrator |
2026-03-05 00:58:00.517034 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-03-05 00:58:00.517038 | orchestrator | Thursday 05 March 2026 00:49:51 +0000 (0:00:00.731) 0:04:16.060 ********
2026-03-05 00:58:00.517042 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:58:00.517045 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:58:00.517049 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:58:00.517053 | orchestrator |
2026-03-05 00:58:00.517057 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-03-05 00:58:00.517060 | orchestrator | Thursday 05 March 2026 00:49:53 +0000 (0:00:01.946) 0:04:18.006 ********
2026-03-05 00:58:00.517064 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-05 00:58:00.517087 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-05 00:58:00.517092 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-05 00:58:00.517096 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:58:00.517100 | orchestrator |
2026-03-05 00:58:00.517104 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-03-05 00:58:00.517108 | orchestrator | Thursday 05 March 2026 00:49:54 +0000 (0:00:00.857) 0:04:18.863 ********
2026-03-05 00:58:00.517111 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:58:00.517115 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:58:00.517119 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:58:00.517123 | orchestrator |
2026-03-05 00:58:00.517126 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2026-03-05 00:58:00.517130 | orchestrator | Thursday 05 March 2026 00:49:54 +0000 (0:00:00.361) 0:04:19.225 ********
2026-03-05 00:58:00.517134 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:58:00.517138 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:58:00.517142 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:58:00.517145 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:58:00.517149 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:58:00.517158 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:58:00.517162 | orchestrator |
2026-03-05 00:58:00.517165 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-03-05 00:58:00.517169 | orchestrator | Thursday 05 March 2026 00:49:55 +0000 (0:00:01.119) 0:04:20.345 ********
2026-03-05 00:58:00.517173 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:58:00.517177 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:58:00.517181 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:58:00.517184 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 00:58:00.517188 | orchestrator |
2026-03-05 00:58:00.517192 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-03-05 00:58:00.517196 | orchestrator | Thursday 05 March 2026 00:49:56 +0000 (0:00:00.993) 0:04:21.338 ********
2026-03-05 00:58:00.517200 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:58:00.517203 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:58:00.517207 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:58:00.517211 | orchestrator |
2026-03-05 00:58:00.517215 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-03-05 00:58:00.517219 | orchestrator | Thursday 05 March 2026 00:49:57 +0000 (0:00:00.674) 0:04:22.013 ********
2026-03-05 00:58:00.517222 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:58:00.517226 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:58:00.517235 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:58:00.517239 | orchestrator |
2026-03-05 00:58:00.517243 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-03-05 00:58:00.517246 | orchestrator | Thursday 05 March 2026 00:49:58 +0000 (0:00:01.282) 0:04:23.295 ********
2026-03-05 00:58:00.517250 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-05 00:58:00.517254 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-05 00:58:00.517258 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-05 00:58:00.517261 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:58:00.517265 | orchestrator |
2026-03-05 00:58:00.517269 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-03-05 00:58:00.517273 | orchestrator | Thursday 05 March 2026 00:49:59 +0000 (0:00:00.633) 0:04:23.928 ********
2026-03-05 00:58:00.517276 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:58:00.517280 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:58:00.517284 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:58:00.517288 | orchestrator |
2026-03-05 00:58:00.517292 | orchestrator | PLAY [Apply role ceph-mon] *****************************************************
2026-03-05 00:58:00.517295 | orchestrator |
2026-03-05 00:58:00.517299 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-05 00:58:00.517303 | orchestrator | Thursday 05 March 2026 00:50:00 +0000 (0:00:00.938) 0:04:24.867 ********
2026-03-05 00:58:00.517307 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 00:58:00.517311 | orchestrator |
2026-03-05 00:58:00.517315 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-05 00:58:00.517318 | orchestrator | Thursday 05 March 2026 00:50:00 +0000 (0:00:00.709) 0:04:25.577 ********
2026-03-05 00:58:00.517322 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 00:58:00.517326 | orchestrator |
2026-03-05 00:58:00.517330 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-05 00:58:00.517334 | orchestrator | Thursday 05 March 2026 00:50:01 +0000 (0:00:00.660) 0:04:26.237 ********
2026-03-05 00:58:00.517337 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:58:00.517341 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:58:00.517345 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:58:00.517349 | orchestrator |
2026-03-05 00:58:00.517353 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-05 00:58:00.517356 | orchestrator | Thursday 05 March 2026 00:50:03 +0000 (0:00:01.500) 0:04:27.738 ********
2026-03-05 00:58:00.517360 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:58:00.517364 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:58:00.517368 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:58:00.517371 | orchestrator |
2026-03-05 00:58:00.517375 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-05 00:58:00.517379 | orchestrator | Thursday 05 March 2026 00:50:03 +0000 (0:00:00.498) 0:04:28.237 ********
2026-03-05 00:58:00.517383 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:58:00.517386 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:58:00.517390 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:58:00.517394 | orchestrator |
2026-03-05 00:58:00.517398 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-05 00:58:00.517401 | orchestrator | Thursday 05 March 2026 00:50:03 +0000 (0:00:00.377) 0:04:28.614 ********
2026-03-05 00:58:00.517405 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:58:00.517409 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:58:00.517413 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:58:00.517416 | orchestrator |
2026-03-05 00:58:00.517420 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-05 00:58:00.517424 | orchestrator | Thursday 05 March 2026 00:50:04 +0000 (0:00:00.421) 0:04:29.035 ********
2026-03-05 00:58:00.517431 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:58:00.517435 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:58:00.517438 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:58:00.517442 | orchestrator |
2026-03-05 00:58:00.517446 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-05 00:58:00.517450 | orchestrator | Thursday 05 March 2026 00:50:05 +0000 (0:00:01.579) 0:04:30.614 ********
2026-03-05 00:58:00.517454 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:58:00.517457 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:58:00.517461 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:58:00.517465 | orchestrator |
2026-03-05 00:58:00.517469 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-05 00:58:00.517473 | orchestrator | Thursday 05 March 2026 00:50:06 +0000 (0:00:00.404) 0:04:31.019 ********
2026-03-05 00:58:00.517489 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:58:00.517493 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:58:00.517497 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:58:00.517501 | orchestrator |
2026-03-05 00:58:00.517505 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-05 00:58:00.517509 | orchestrator | Thursday 05 March 2026 00:50:06 +0000 (0:00:00.363) 0:04:31.382 ********
2026-03-05 00:58:00.517512 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:58:00.517516 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:58:00.517520 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:58:00.517524 | orchestrator |
2026-03-05 00:58:00.517528 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-05 00:58:00.517532 | orchestrator | Thursday 05 March 2026 00:50:07 +0000 (0:00:00.871) 0:04:32.254 ********
2026-03-05 00:58:00.517536 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:58:00.517539 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:58:00.517543 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:58:00.517547 | orchestrator |
2026-03-05 00:58:00.517551 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-05 00:58:00.517555 | orchestrator | Thursday 05 March 2026 00:50:08 +0000 (0:00:01.301) 0:04:33.555 ********
2026-03-05 00:58:00.517558 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:58:00.517562 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:58:00.517566 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:58:00.517570 | orchestrator |
2026-03-05 00:58:00.517574 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-05 00:58:00.517578 | orchestrator |
Thursday 05 March 2026 00:50:09 +0000 (0:00:00.474) 0:04:34.030 ******** 2026-03-05 00:58:00.517582 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:58:00.517585 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:58:00.517589 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:58:00.517593 | orchestrator | 2026-03-05 00:58:00.517597 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-05 00:58:00.517601 | orchestrator | Thursday 05 March 2026 00:50:09 +0000 (0:00:00.546) 0:04:34.576 ******** 2026-03-05 00:58:00.517604 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.517608 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.517612 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:00.517616 | orchestrator | 2026-03-05 00:58:00.517620 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-05 00:58:00.517623 | orchestrator | Thursday 05 March 2026 00:50:10 +0000 (0:00:00.508) 0:04:35.085 ******** 2026-03-05 00:58:00.517627 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.517631 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.517635 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:00.517639 | orchestrator | 2026-03-05 00:58:00.517642 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-05 00:58:00.517646 | orchestrator | Thursday 05 March 2026 00:50:11 +0000 (0:00:01.137) 0:04:36.222 ******** 2026-03-05 00:58:00.517650 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.517657 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.517661 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:00.517664 | orchestrator | 2026-03-05 00:58:00.517668 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-05 00:58:00.517672 | orchestrator | Thursday 05 March 
2026 00:50:12 +0000 (0:00:00.724) 0:04:36.947 ******** 2026-03-05 00:58:00.517676 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.517680 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.517684 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:00.517687 | orchestrator | 2026-03-05 00:58:00.517691 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-05 00:58:00.517695 | orchestrator | Thursday 05 March 2026 00:50:12 +0000 (0:00:00.450) 0:04:37.397 ******** 2026-03-05 00:58:00.517699 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.517703 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.517706 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:00.517710 | orchestrator | 2026-03-05 00:58:00.517714 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-05 00:58:00.517718 | orchestrator | Thursday 05 March 2026 00:50:13 +0000 (0:00:00.532) 0:04:37.929 ******** 2026-03-05 00:58:00.517722 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:58:00.517725 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:58:00.517729 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:58:00.517733 | orchestrator | 2026-03-05 00:58:00.517737 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-05 00:58:00.517741 | orchestrator | Thursday 05 March 2026 00:50:13 +0000 (0:00:00.538) 0:04:38.468 ******** 2026-03-05 00:58:00.517745 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:58:00.517748 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:58:00.517752 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:58:00.517756 | orchestrator | 2026-03-05 00:58:00.517760 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-05 00:58:00.517764 | orchestrator | Thursday 05 March 2026 00:50:14 +0000 (0:00:01.038) 
0:04:39.507 ******** 2026-03-05 00:58:00.517768 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:58:00.517771 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:58:00.517775 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:58:00.517779 | orchestrator | 2026-03-05 00:58:00.517783 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-03-05 00:58:00.517787 | orchestrator | Thursday 05 March 2026 00:50:16 +0000 (0:00:01.431) 0:04:40.939 ******** 2026-03-05 00:58:00.517790 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:58:00.517794 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:58:00.517798 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:58:00.517802 | orchestrator | 2026-03-05 00:58:00.517806 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-03-05 00:58:00.517809 | orchestrator | Thursday 05 March 2026 00:50:16 +0000 (0:00:00.581) 0:04:41.521 ******** 2026-03-05 00:58:00.517813 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:58:00.517817 | orchestrator | 2026-03-05 00:58:00.517821 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-03-05 00:58:00.517825 | orchestrator | Thursday 05 March 2026 00:50:18 +0000 (0:00:01.265) 0:04:42.786 ******** 2026-03-05 00:58:00.517829 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.517833 | orchestrator | 2026-03-05 00:58:00.517845 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-03-05 00:58:00.517852 | orchestrator | Thursday 05 March 2026 00:50:18 +0000 (0:00:00.274) 0:04:43.060 ******** 2026-03-05 00:58:00.517862 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-05 00:58:00.517870 | orchestrator | 2026-03-05 00:58:00.517876 | orchestrator | TASK [ceph-mon : Set_fact 
_initial_mon_key_success] **************************** 2026-03-05 00:58:00.517882 | orchestrator | Thursday 05 March 2026 00:50:19 +0000 (0:00:01.308) 0:04:44.369 ******** 2026-03-05 00:58:00.517889 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:58:00.517899 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:58:00.517906 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:58:00.517912 | orchestrator | 2026-03-05 00:58:00.517920 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-03-05 00:58:00.517927 | orchestrator | Thursday 05 March 2026 00:50:20 +0000 (0:00:01.039) 0:04:45.408 ******** 2026-03-05 00:58:00.517933 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:58:00.517939 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:58:00.517946 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:58:00.517952 | orchestrator | 2026-03-05 00:58:00.517958 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-03-05 00:58:00.517966 | orchestrator | Thursday 05 March 2026 00:50:22 +0000 (0:00:01.587) 0:04:46.996 ******** 2026-03-05 00:58:00.517973 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:58:00.517977 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:58:00.517981 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:58:00.517985 | orchestrator | 2026-03-05 00:58:00.517989 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-03-05 00:58:00.517992 | orchestrator | Thursday 05 March 2026 00:50:25 +0000 (0:00:03.288) 0:04:50.284 ******** 2026-03-05 00:58:00.517996 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:58:00.518000 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:58:00.518003 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:58:00.518007 | orchestrator | 2026-03-05 00:58:00.518011 | orchestrator | TASK [ceph-mon : Create monitor directory] 
************************************* 2026-03-05 00:58:00.518032 | orchestrator | Thursday 05 March 2026 00:50:26 +0000 (0:00:00.929) 0:04:51.214 ******** 2026-03-05 00:58:00.518037 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:58:00.518040 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:58:00.518044 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:58:00.518048 | orchestrator | 2026-03-05 00:58:00.518052 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-03-05 00:58:00.518055 | orchestrator | Thursday 05 March 2026 00:50:27 +0000 (0:00:00.808) 0:04:52.022 ******** 2026-03-05 00:58:00.518059 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:58:00.518063 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:58:00.518067 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:58:00.518083 | orchestrator | 2026-03-05 00:58:00.518087 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-03-05 00:58:00.518091 | orchestrator | Thursday 05 March 2026 00:50:28 +0000 (0:00:00.692) 0:04:52.715 ******** 2026-03-05 00:58:00.518095 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:58:00.518099 | orchestrator | 2026-03-05 00:58:00.518102 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-03-05 00:58:00.518106 | orchestrator | Thursday 05 March 2026 00:50:29 +0000 (0:00:01.813) 0:04:54.528 ******** 2026-03-05 00:58:00.518110 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:58:00.518114 | orchestrator | 2026-03-05 00:58:00.518117 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-03-05 00:58:00.518121 | orchestrator | Thursday 05 March 2026 00:50:30 +0000 (0:00:00.754) 0:04:55.283 ******** 2026-03-05 00:58:00.518125 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-05 00:58:00.518129 | orchestrator | ok: [testbed-node-1 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2026-03-05 00:58:00.518132 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-05 00:58:00.518136 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-05 00:58:00.518140 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-05 00:58:00.518144 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-03-05 00:58:00.518148 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-05 00:58:00.518151 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2026-03-05 00:58:00.518155 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-03-05 00:58:00.518162 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2026-03-05 00:58:00.518166 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-05 00:58:00.518170 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2026-03-05 00:58:00.518174 | orchestrator | 2026-03-05 00:58:00.518177 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-03-05 00:58:00.518181 | orchestrator | Thursday 05 March 2026 00:50:34 +0000 (0:00:04.032) 0:04:59.315 ******** 2026-03-05 00:58:00.518185 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:58:00.518189 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:58:00.518192 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:58:00.518196 | orchestrator | 2026-03-05 00:58:00.518200 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-03-05 00:58:00.518207 | orchestrator | Thursday 05 March 2026 00:50:36 +0000 (0:00:01.977) 0:05:01.292 ******** 2026-03-05 00:58:00.518213 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:58:00.518219 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:58:00.518225 | orchestrator | ok: [testbed-node-2] 
2026-03-05 00:58:00.518231 | orchestrator | 2026-03-05 00:58:00.518238 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-03-05 00:58:00.518243 | orchestrator | Thursday 05 March 2026 00:50:37 +0000 (0:00:00.542) 0:05:01.834 ******** 2026-03-05 00:58:00.518249 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:58:00.518255 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:58:00.518260 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:58:00.518267 | orchestrator | 2026-03-05 00:58:00.518275 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-03-05 00:58:00.518281 | orchestrator | Thursday 05 March 2026 00:50:38 +0000 (0:00:00.913) 0:05:02.748 ******** 2026-03-05 00:58:00.518286 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:58:00.518302 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:58:00.518310 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:58:00.518315 | orchestrator | 2026-03-05 00:58:00.518321 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-03-05 00:58:00.518327 | orchestrator | Thursday 05 March 2026 00:50:40 +0000 (0:00:01.994) 0:05:04.742 ******** 2026-03-05 00:58:00.518333 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:58:00.518340 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:58:00.518346 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:58:00.518350 | orchestrator | 2026-03-05 00:58:00.518353 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-03-05 00:58:00.518357 | orchestrator | Thursday 05 March 2026 00:50:41 +0000 (0:00:01.602) 0:05:06.345 ******** 2026-03-05 00:58:00.518361 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.518364 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.518368 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:00.518372 
| orchestrator | 2026-03-05 00:58:00.518376 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-03-05 00:58:00.518379 | orchestrator | Thursday 05 March 2026 00:50:42 +0000 (0:00:00.333) 0:05:06.679 ******** 2026-03-05 00:58:00.518383 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:58:00.518387 | orchestrator | 2026-03-05 00:58:00.518391 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-03-05 00:58:00.518395 | orchestrator | Thursday 05 March 2026 00:50:42 +0000 (0:00:00.878) 0:05:07.558 ******** 2026-03-05 00:58:00.518398 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.518402 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.518406 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:00.518409 | orchestrator | 2026-03-05 00:58:00.518413 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-03-05 00:58:00.518417 | orchestrator | Thursday 05 March 2026 00:50:43 +0000 (0:00:00.389) 0:05:07.947 ******** 2026-03-05 00:58:00.518421 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.518428 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.518432 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:00.518435 | orchestrator | 2026-03-05 00:58:00.518439 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-03-05 00:58:00.518443 | orchestrator | Thursday 05 March 2026 00:50:43 +0000 (0:00:00.399) 0:05:08.347 ******** 2026-03-05 00:58:00.518447 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:58:00.518451 | orchestrator | 2026-03-05 00:58:00.518454 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] 
***************** 2026-03-05 00:58:00.518458 | orchestrator | Thursday 05 March 2026 00:50:44 +0000 (0:00:00.898) 0:05:09.246 ******** 2026-03-05 00:58:00.518462 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:58:00.518466 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:58:00.518469 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:58:00.518473 | orchestrator | 2026-03-05 00:58:00.518477 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-03-05 00:58:00.518481 | orchestrator | Thursday 05 March 2026 00:50:46 +0000 (0:00:02.209) 0:05:11.456 ******** 2026-03-05 00:58:00.518484 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:58:00.518488 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:58:00.518492 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:58:00.518496 | orchestrator | 2026-03-05 00:58:00.518499 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-03-05 00:58:00.518503 | orchestrator | Thursday 05 March 2026 00:50:48 +0000 (0:00:01.216) 0:05:12.672 ******** 2026-03-05 00:58:00.518507 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:58:00.518510 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:58:00.518514 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:58:00.518518 | orchestrator | 2026-03-05 00:58:00.518522 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-03-05 00:58:00.518525 | orchestrator | Thursday 05 March 2026 00:50:50 +0000 (0:00:02.621) 0:05:15.293 ******** 2026-03-05 00:58:00.518529 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:58:00.518533 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:58:00.518537 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:58:00.518540 | orchestrator | 2026-03-05 00:58:00.518544 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] 
********************************** 2026-03-05 00:58:00.518548 | orchestrator | Thursday 05 March 2026 00:50:52 +0000 (0:00:02.297) 0:05:17.591 ******** 2026-03-05 00:58:00.518552 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:58:00.518556 | orchestrator | 2026-03-05 00:58:00.518559 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-03-05 00:58:00.518563 | orchestrator | Thursday 05 March 2026 00:50:53 +0000 (0:00:00.609) 0:05:18.201 ******** 2026-03-05 00:58:00.518567 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 2026-03-05 00:58:00.518571 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:58:00.518574 | orchestrator | 2026-03-05 00:58:00.518578 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-03-05 00:58:00.518582 | orchestrator | Thursday 05 March 2026 00:51:15 +0000 (0:00:22.165) 0:05:40.366 ******** 2026-03-05 00:58:00.518586 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:58:00.518590 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:58:00.518593 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:58:00.518597 | orchestrator | 2026-03-05 00:58:00.518601 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-03-05 00:58:00.518605 | orchestrator | Thursday 05 March 2026 00:51:26 +0000 (0:00:10.768) 0:05:51.135 ******** 2026-03-05 00:58:00.518608 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.518612 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.518616 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:00.518620 | orchestrator | 2026-03-05 00:58:00.518626 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-03-05 00:58:00.518635 | orchestrator | 
Thursday 05 March 2026 00:51:27 +0000 (0:00:00.762) 0:05:51.897 ******** 2026-03-05 00:58:00.518641 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__ea69bd88f7120b6e8ba315475a41dcc1db5137f8'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-03-05 00:58:00.518646 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__ea69bd88f7120b6e8ba315475a41dcc1db5137f8'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-03-05 00:58:00.518650 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__ea69bd88f7120b6e8ba315475a41dcc1db5137f8'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-03-05 00:58:00.518655 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__ea69bd88f7120b6e8ba315475a41dcc1db5137f8'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-03-05 00:58:00.518659 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 
'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__ea69bd88f7120b6e8ba315475a41dcc1db5137f8'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-03-05 00:58:00.518663 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__ea69bd88f7120b6e8ba315475a41dcc1db5137f8'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__ea69bd88f7120b6e8ba315475a41dcc1db5137f8'}])  2026-03-05 00:58:00.518668 | orchestrator | 2026-03-05 00:58:00.518672 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-05 00:58:00.518676 | orchestrator | Thursday 05 March 2026 00:51:42 +0000 (0:00:15.134) 0:06:07.032 ******** 2026-03-05 00:58:00.518679 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.518683 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.518687 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:00.518690 | orchestrator | 2026-03-05 00:58:00.518694 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-03-05 00:58:00.518698 | orchestrator | Thursday 05 March 2026 00:51:42 +0000 (0:00:00.359) 0:06:07.391 ******** 2026-03-05 00:58:00.518702 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:58:00.518706 | orchestrator | 2026-03-05 00:58:00.518709 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-03-05 00:58:00.518713 | orchestrator | Thursday 05 March 2026 00:51:43 +0000 (0:00:00.977) 0:06:08.369 ******** 2026-03-05 00:58:00.518717 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:58:00.518721 | orchestrator | ok: [testbed-node-1] 2026-03-05 
00:58:00.518724 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:58:00.518728 | orchestrator | 2026-03-05 00:58:00.518732 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-03-05 00:58:00.518738 | orchestrator | Thursday 05 March 2026 00:51:44 +0000 (0:00:00.352) 0:06:08.721 ******** 2026-03-05 00:58:00.518742 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.518746 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.518750 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:00.518753 | orchestrator | 2026-03-05 00:58:00.518757 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-03-05 00:58:00.518761 | orchestrator | Thursday 05 March 2026 00:51:44 +0000 (0:00:00.402) 0:06:09.124 ******** 2026-03-05 00:58:00.518765 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-05 00:58:00.518769 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-05 00:58:00.518772 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-05 00:58:00.518776 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.518780 | orchestrator | 2026-03-05 00:58:00.518783 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-03-05 00:58:00.518787 | orchestrator | Thursday 05 March 2026 00:51:45 +0000 (0:00:01.406) 0:06:10.530 ******** 2026-03-05 00:58:00.518791 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:58:00.518795 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:58:00.518803 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:58:00.518807 | orchestrator | 2026-03-05 00:58:00.518811 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2026-03-05 00:58:00.518814 | orchestrator | 2026-03-05 00:58:00.518818 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] 
************************ 2026-03-05 00:58:00.518822 | orchestrator | Thursday 05 March 2026 00:51:46 +0000 (0:00:00.662) 0:06:11.192 ******** 2026-03-05 00:58:00.518826 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:58:00.518830 | orchestrator | 2026-03-05 00:58:00.518834 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-05 00:58:00.518838 | orchestrator | Thursday 05 March 2026 00:51:47 +0000 (0:00:00.555) 0:06:11.748 ******** 2026-03-05 00:58:00.518842 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:58:00.518845 | orchestrator | 2026-03-05 00:58:00.518849 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-05 00:58:00.518853 | orchestrator | Thursday 05 March 2026 00:51:47 +0000 (0:00:00.884) 0:06:12.633 ******** 2026-03-05 00:58:00.518857 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:58:00.518860 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:58:00.518864 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:58:00.518868 | orchestrator | 2026-03-05 00:58:00.518871 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-05 00:58:00.518875 | orchestrator | Thursday 05 March 2026 00:51:48 +0000 (0:00:00.790) 0:06:13.424 ******** 2026-03-05 00:58:00.518879 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.518883 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.518886 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:00.518890 | orchestrator | 2026-03-05 00:58:00.518894 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-05 00:58:00.518898 | orchestrator | Thursday 05 March 2026 00:51:49 +0000 
(0:00:00.434) 0:06:13.859 ******** 2026-03-05 00:58:00.518902 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.518905 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.518909 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:00.518913 | orchestrator | 2026-03-05 00:58:00.518917 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-05 00:58:00.518920 | orchestrator | Thursday 05 March 2026 00:51:49 +0000 (0:00:00.612) 0:06:14.471 ******** 2026-03-05 00:58:00.518924 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.518928 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.518934 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:00.518938 | orchestrator | 2026-03-05 00:58:00.518942 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-05 00:58:00.518946 | orchestrator | Thursday 05 March 2026 00:51:50 +0000 (0:00:00.407) 0:06:14.879 ******** 2026-03-05 00:58:00.518949 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:58:00.518953 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:58:00.518957 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:58:00.518961 | orchestrator | 2026-03-05 00:58:00.518964 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-05 00:58:00.518968 | orchestrator | Thursday 05 March 2026 00:51:51 +0000 (0:00:00.762) 0:06:15.641 ******** 2026-03-05 00:58:00.518972 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.518976 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.518979 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:00.518983 | orchestrator | 2026-03-05 00:58:00.518987 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-05 00:58:00.518991 | orchestrator | Thursday 05 March 2026 00:51:51 +0000 (0:00:00.421) 
0:06:16.062 ******** 2026-03-05 00:58:00.518994 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.518998 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.519002 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:00.519005 | orchestrator | 2026-03-05 00:58:00.519009 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-05 00:58:00.519013 | orchestrator | Thursday 05 March 2026 00:51:52 +0000 (0:00:00.654) 0:06:16.717 ******** 2026-03-05 00:58:00.519017 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:58:00.519020 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:58:00.519024 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:58:00.519028 | orchestrator | 2026-03-05 00:58:00.519032 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-05 00:58:00.519036 | orchestrator | Thursday 05 March 2026 00:51:52 +0000 (0:00:00.819) 0:06:17.536 ******** 2026-03-05 00:58:00.519039 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:58:00.519043 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:58:00.519047 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:58:00.519050 | orchestrator | 2026-03-05 00:58:00.519054 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-05 00:58:00.519058 | orchestrator | Thursday 05 March 2026 00:51:53 +0000 (0:00:00.794) 0:06:18.331 ******** 2026-03-05 00:58:00.519062 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.519066 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.519092 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:00.519097 | orchestrator | 2026-03-05 00:58:00.519100 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-05 00:58:00.519104 | orchestrator | Thursday 05 March 2026 00:51:54 +0000 (0:00:00.388) 0:06:18.719 ******** 2026-03-05 
00:58:00.519108 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:58:00.519112 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:58:00.519116 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:58:00.519119 | orchestrator | 2026-03-05 00:58:00.519123 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-05 00:58:00.519127 | orchestrator | Thursday 05 March 2026 00:51:54 +0000 (0:00:00.699) 0:06:19.419 ******** 2026-03-05 00:58:00.519131 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.519135 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.519138 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:00.519142 | orchestrator | 2026-03-05 00:58:00.519146 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-05 00:58:00.519152 | orchestrator | Thursday 05 March 2026 00:51:55 +0000 (0:00:00.376) 0:06:19.795 ******** 2026-03-05 00:58:00.519159 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.519163 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.519167 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:00.519170 | orchestrator | 2026-03-05 00:58:00.519174 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-05 00:58:00.519182 | orchestrator | Thursday 05 March 2026 00:51:55 +0000 (0:00:00.400) 0:06:20.196 ******** 2026-03-05 00:58:00.519186 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.519190 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.519194 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:00.519197 | orchestrator | 2026-03-05 00:58:00.519201 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-05 00:58:00.519205 | orchestrator | Thursday 05 March 2026 00:51:55 +0000 (0:00:00.342) 0:06:20.539 ******** 2026-03-05 00:58:00.519209 | 
orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.519213 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.519216 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:00.519220 | orchestrator | 2026-03-05 00:58:00.519224 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-05 00:58:00.519228 | orchestrator | Thursday 05 March 2026 00:51:56 +0000 (0:00:00.347) 0:06:20.887 ******** 2026-03-05 00:58:00.519231 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.519235 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.519239 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:00.519243 | orchestrator | 2026-03-05 00:58:00.519246 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-05 00:58:00.519250 | orchestrator | Thursday 05 March 2026 00:51:56 +0000 (0:00:00.697) 0:06:21.585 ******** 2026-03-05 00:58:00.519254 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:58:00.519258 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:58:00.519262 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:58:00.519265 | orchestrator | 2026-03-05 00:58:00.519269 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-05 00:58:00.519273 | orchestrator | Thursday 05 March 2026 00:51:57 +0000 (0:00:00.399) 0:06:21.984 ******** 2026-03-05 00:58:00.519277 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:58:00.519281 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:58:00.519284 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:58:00.519288 | orchestrator | 2026-03-05 00:58:00.519292 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-05 00:58:00.519296 | orchestrator | Thursday 05 March 2026 00:51:57 +0000 (0:00:00.366) 0:06:22.350 ******** 2026-03-05 00:58:00.519300 | orchestrator | ok: [testbed-node-0] 
2026-03-05 00:58:00.519303 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:58:00.519307 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:58:00.519311 | orchestrator | 2026-03-05 00:58:00.519315 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-03-05 00:58:00.519319 | orchestrator | Thursday 05 March 2026 00:51:58 +0000 (0:00:00.888) 0:06:23.239 ******** 2026-03-05 00:58:00.519323 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-05 00:58:00.519326 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-05 00:58:00.519330 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-05 00:58:00.519334 | orchestrator | 2026-03-05 00:58:00.519338 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-03-05 00:58:00.519342 | orchestrator | Thursday 05 March 2026 00:51:59 +0000 (0:00:00.726) 0:06:23.965 ******** 2026-03-05 00:58:00.519345 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:58:00.519349 | orchestrator | 2026-03-05 00:58:00.519353 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-03-05 00:58:00.519357 | orchestrator | Thursday 05 March 2026 00:51:59 +0000 (0:00:00.585) 0:06:24.551 ******** 2026-03-05 00:58:00.519361 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:58:00.519364 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:58:00.519368 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:58:00.519372 | orchestrator | 2026-03-05 00:58:00.519376 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-03-05 00:58:00.519382 | orchestrator | Thursday 05 March 2026 00:52:00 +0000 (0:00:00.706) 0:06:25.258 ******** 2026-03-05 00:58:00.519386 | 
orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.519390 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.519394 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:00.519398 | orchestrator | 2026-03-05 00:58:00.519402 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-03-05 00:58:00.519405 | orchestrator | Thursday 05 March 2026 00:52:01 +0000 (0:00:00.651) 0:06:25.909 ******** 2026-03-05 00:58:00.519409 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-05 00:58:00.519413 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-05 00:58:00.519417 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-05 00:58:00.519421 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-03-05 00:58:00.519424 | orchestrator | 2026-03-05 00:58:00.519428 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-03-05 00:58:00.519432 | orchestrator | Thursday 05 March 2026 00:52:11 +0000 (0:00:10.098) 0:06:36.008 ******** 2026-03-05 00:58:00.519436 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:58:00.519439 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:58:00.519443 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:58:00.519447 | orchestrator | 2026-03-05 00:58:00.519451 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-03-05 00:58:00.519455 | orchestrator | Thursday 05 March 2026 00:52:11 +0000 (0:00:00.424) 0:06:36.432 ******** 2026-03-05 00:58:00.519459 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-03-05 00:58:00.519462 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-05 00:58:00.519466 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-03-05 00:58:00.519470 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-03-05 00:58:00.519474 | orchestrator | ok: 
[testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-05 00:58:00.519481 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-05 00:58:00.519485 | orchestrator | 2026-03-05 00:58:00.519489 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-03-05 00:58:00.519493 | orchestrator | Thursday 05 March 2026 00:52:14 +0000 (0:00:02.209) 0:06:38.642 ******** 2026-03-05 00:58:00.519497 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-03-05 00:58:00.519501 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-05 00:58:00.519504 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-03-05 00:58:00.519508 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-05 00:58:00.519512 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-03-05 00:58:00.519516 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-03-05 00:58:00.519519 | orchestrator | 2026-03-05 00:58:00.519523 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-03-05 00:58:00.519527 | orchestrator | Thursday 05 March 2026 00:52:15 +0000 (0:00:01.307) 0:06:39.950 ******** 2026-03-05 00:58:00.519531 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:58:00.519535 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:58:00.519538 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:58:00.519542 | orchestrator | 2026-03-05 00:58:00.519546 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-03-05 00:58:00.519550 | orchestrator | Thursday 05 March 2026 00:52:16 +0000 (0:00:01.084) 0:06:41.035 ******** 2026-03-05 00:58:00.519554 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.519558 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.519561 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:00.519565 | 
orchestrator | 2026-03-05 00:58:00.519569 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-03-05 00:58:00.519573 | orchestrator | Thursday 05 March 2026 00:52:16 +0000 (0:00:00.359) 0:06:41.394 ******** 2026-03-05 00:58:00.519576 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.519583 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.519587 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:00.519591 | orchestrator | 2026-03-05 00:58:00.519594 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-03-05 00:58:00.519598 | orchestrator | Thursday 05 March 2026 00:52:17 +0000 (0:00:00.346) 0:06:41.740 ******** 2026-03-05 00:58:00.519602 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:58:00.519606 | orchestrator | 2026-03-05 00:58:00.519610 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-03-05 00:58:00.519614 | orchestrator | Thursday 05 March 2026 00:52:18 +0000 (0:00:01.084) 0:06:42.825 ******** 2026-03-05 00:58:00.519617 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.519621 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.519625 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:00.519629 | orchestrator | 2026-03-05 00:58:00.519632 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-03-05 00:58:00.519636 | orchestrator | Thursday 05 March 2026 00:52:18 +0000 (0:00:00.480) 0:06:43.306 ******** 2026-03-05 00:58:00.519640 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.519644 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.519648 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:00.519651 | orchestrator | 2026-03-05 00:58:00.519655 | orchestrator | TASK 
[ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-03-05 00:58:00.519659 | orchestrator | Thursday 05 March 2026 00:52:19 +0000 (0:00:00.465) 0:06:43.771 ******** 2026-03-05 00:58:00.519663 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:58:00.519667 | orchestrator | 2026-03-05 00:58:00.519671 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-03-05 00:58:00.519674 | orchestrator | Thursday 05 March 2026 00:52:20 +0000 (0:00:00.931) 0:06:44.703 ******** 2026-03-05 00:58:00.519678 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:58:00.519682 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:58:00.519686 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:58:00.519689 | orchestrator | 2026-03-05 00:58:00.519693 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-03-05 00:58:00.519697 | orchestrator | Thursday 05 March 2026 00:52:21 +0000 (0:00:01.337) 0:06:46.040 ******** 2026-03-05 00:58:00.519701 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:58:00.519705 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:58:00.519708 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:58:00.519712 | orchestrator | 2026-03-05 00:58:00.519716 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-03-05 00:58:00.519720 | orchestrator | Thursday 05 March 2026 00:52:22 +0000 (0:00:01.153) 0:06:47.194 ******** 2026-03-05 00:58:00.519724 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:58:00.519728 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:58:00.519731 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:58:00.519735 | orchestrator | 2026-03-05 00:58:00.519739 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 
2026-03-05 00:58:00.519743 | orchestrator | Thursday 05 March 2026 00:52:24 +0000 (0:00:01.790) 0:06:48.984 ******** 2026-03-05 00:58:00.519747 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:58:00.519750 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:58:00.519754 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:58:00.519758 | orchestrator | 2026-03-05 00:58:00.519762 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-03-05 00:58:00.519766 | orchestrator | Thursday 05 March 2026 00:52:26 +0000 (0:00:02.295) 0:06:51.279 ******** 2026-03-05 00:58:00.519769 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.519773 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.519777 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2026-03-05 00:58:00.519783 | orchestrator | 2026-03-05 00:58:00.519787 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2026-03-05 00:58:00.519791 | orchestrator | Thursday 05 March 2026 00:52:27 +0000 (0:00:00.513) 0:06:51.793 ******** 2026-03-05 00:58:00.519861 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2026-03-05 00:58:00.519867 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2026-03-05 00:58:00.519871 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2026-03-05 00:58:00.519874 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 
2026-03-05 00:58:00.519878 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-03-05 00:58:00.519882 | orchestrator | 2026-03-05 00:58:00.519886 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2026-03-05 00:58:00.519890 | orchestrator | Thursday 05 March 2026 00:52:51 +0000 (0:00:24.522) 0:07:16.315 ******** 2026-03-05 00:58:00.519893 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-03-05 00:58:00.519897 | orchestrator | 2026-03-05 00:58:00.519901 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2026-03-05 00:58:00.519905 | orchestrator | Thursday 05 March 2026 00:52:52 +0000 (0:00:01.290) 0:07:17.606 ******** 2026-03-05 00:58:00.519909 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:58:00.519912 | orchestrator | 2026-03-05 00:58:00.519916 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2026-03-05 00:58:00.519920 | orchestrator | Thursday 05 March 2026 00:52:53 +0000 (0:00:00.355) 0:07:17.961 ******** 2026-03-05 00:58:00.519924 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:58:00.519927 | orchestrator | 2026-03-05 00:58:00.519931 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2026-03-05 00:58:00.519935 | orchestrator | Thursday 05 March 2026 00:52:53 +0000 (0:00:00.162) 0:07:18.124 ******** 2026-03-05 00:58:00.519939 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2026-03-05 00:58:00.519943 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2026-03-05 00:58:00.519946 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2026-03-05 00:58:00.519950 | orchestrator | 2026-03-05 00:58:00.519954 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] 
************************************** 2026-03-05 00:58:00.519958 | orchestrator | Thursday 05 March 2026 00:53:00 +0000 (0:00:06.937) 0:07:25.061 ******** 2026-03-05 00:58:00.519962 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2026-03-05 00:58:00.519965 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2026-03-05 00:58:00.519969 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2026-03-05 00:58:00.519973 | orchestrator | skipping: [testbed-node-2] => (item=status)  2026-03-05 00:58:00.519977 | orchestrator | 2026-03-05 00:58:00.519981 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-05 00:58:00.519984 | orchestrator | Thursday 05 March 2026 00:53:06 +0000 (0:00:05.982) 0:07:31.044 ******** 2026-03-05 00:58:00.519988 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:58:00.519992 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:58:00.519996 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:58:00.519999 | orchestrator | 2026-03-05 00:58:00.520003 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-03-05 00:58:00.520007 | orchestrator | Thursday 05 March 2026 00:53:07 +0000 (0:00:00.759) 0:07:31.803 ******** 2026-03-05 00:58:00.520011 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:58:00.520015 | orchestrator | 2026-03-05 00:58:00.520018 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-03-05 00:58:00.520025 | orchestrator | Thursday 05 March 2026 00:53:08 +0000 (0:00:01.215) 0:07:33.019 ******** 2026-03-05 00:58:00.520029 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:58:00.520033 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:58:00.520036 | orchestrator | ok: 
[testbed-node-2] 2026-03-05 00:58:00.520040 | orchestrator | 2026-03-05 00:58:00.520044 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-03-05 00:58:00.520048 | orchestrator | Thursday 05 March 2026 00:53:08 +0000 (0:00:00.414) 0:07:33.433 ******** 2026-03-05 00:58:00.520051 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:58:00.520055 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:58:00.520059 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:58:00.520063 | orchestrator | 2026-03-05 00:58:00.520066 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-03-05 00:58:00.520078 | orchestrator | Thursday 05 March 2026 00:53:10 +0000 (0:00:01.431) 0:07:34.865 ******** 2026-03-05 00:58:00.520082 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-05 00:58:00.520086 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-05 00:58:00.520089 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-05 00:58:00.520093 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.520097 | orchestrator | 2026-03-05 00:58:00.520101 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-03-05 00:58:00.520105 | orchestrator | Thursday 05 March 2026 00:53:11 +0000 (0:00:01.334) 0:07:36.199 ******** 2026-03-05 00:58:00.520108 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:58:00.520112 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:58:00.520116 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:58:00.520120 | orchestrator | 2026-03-05 00:58:00.520124 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2026-03-05 00:58:00.520127 | orchestrator | 2026-03-05 00:58:00.520131 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-05 
00:58:00.520135 | orchestrator | Thursday 05 March 2026 00:53:12 +0000 (0:00:00.978) 0:07:37.178 ******** 2026-03-05 00:58:00.520139 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-05 00:58:00.520143 | orchestrator | 2026-03-05 00:58:00.520161 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-05 00:58:00.520166 | orchestrator | Thursday 05 March 2026 00:53:13 +0000 (0:00:00.617) 0:07:37.795 ******** 2026-03-05 00:58:00.520169 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-05 00:58:00.520173 | orchestrator | 2026-03-05 00:58:00.520177 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-05 00:58:00.520181 | orchestrator | Thursday 05 March 2026 00:53:14 +0000 (0:00:00.926) 0:07:38.722 ******** 2026-03-05 00:58:00.520184 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.520188 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:58:00.520192 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:58:00.520196 | orchestrator | 2026-03-05 00:58:00.520200 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-05 00:58:00.520203 | orchestrator | Thursday 05 March 2026 00:53:14 +0000 (0:00:00.399) 0:07:39.121 ******** 2026-03-05 00:58:00.520207 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:58:00.520211 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:58:00.520215 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:58:00.520218 | orchestrator | 2026-03-05 00:58:00.520222 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-05 00:58:00.520226 | orchestrator | Thursday 05 March 2026 00:53:15 +0000 (0:00:00.755) 0:07:39.877 ******** 
2026-03-05 00:58:00.520230 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:58:00.520233 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:58:00.520237 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:58:00.520245 | orchestrator | 2026-03-05 00:58:00.520248 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-05 00:58:00.520252 | orchestrator | Thursday 05 March 2026 00:53:16 +0000 (0:00:00.777) 0:07:40.654 ******** 2026-03-05 00:58:00.520256 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:58:00.520260 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:58:00.520263 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:58:00.520267 | orchestrator | 2026-03-05 00:58:00.520271 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-05 00:58:00.520275 | orchestrator | Thursday 05 March 2026 00:53:17 +0000 (0:00:01.095) 0:07:41.749 ******** 2026-03-05 00:58:00.520279 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.520282 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:58:00.520286 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:58:00.520290 | orchestrator | 2026-03-05 00:58:00.520294 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-05 00:58:00.520297 | orchestrator | Thursday 05 March 2026 00:53:17 +0000 (0:00:00.467) 0:07:42.217 ******** 2026-03-05 00:58:00.520301 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.520305 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:58:00.520309 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:58:00.520312 | orchestrator | 2026-03-05 00:58:00.520316 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-05 00:58:00.520320 | orchestrator | Thursday 05 March 2026 00:53:18 +0000 (0:00:00.465) 0:07:42.683 ******** 2026-03-05 00:58:00.520324 | 
orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.520328 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:58:00.520331 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:58:00.520335 | orchestrator | 2026-03-05 00:58:00.520339 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-05 00:58:00.520343 | orchestrator | Thursday 05 March 2026 00:53:18 +0000 (0:00:00.315) 0:07:42.999 ******** 2026-03-05 00:58:00.520347 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:58:00.520350 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:58:00.520354 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:58:00.520358 | orchestrator | 2026-03-05 00:58:00.520362 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-05 00:58:00.520365 | orchestrator | Thursday 05 March 2026 00:53:19 +0000 (0:00:01.075) 0:07:44.074 ******** 2026-03-05 00:58:00.520369 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:58:00.520373 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:58:00.520377 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:58:00.520380 | orchestrator | 2026-03-05 00:58:00.520384 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-05 00:58:00.520388 | orchestrator | Thursday 05 March 2026 00:53:20 +0000 (0:00:00.789) 0:07:44.864 ******** 2026-03-05 00:58:00.520392 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.520395 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:58:00.520399 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:58:00.520403 | orchestrator | 2026-03-05 00:58:00.520407 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-05 00:58:00.520410 | orchestrator | Thursday 05 March 2026 00:53:20 +0000 (0:00:00.343) 0:07:45.207 ******** 2026-03-05 00:58:00.520414 | orchestrator | skipping: 
[testbed-node-3]
2026-03-05 00:58:00.520418 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:58:00.520422 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:58:00.520425 | orchestrator |
2026-03-05 00:58:00.520429 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-05 00:58:00.520433 | orchestrator | Thursday 05 March 2026 00:53:20 +0000 (0:00:00.371) 0:07:45.579 ********
2026-03-05 00:58:00.520437 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:58:00.520441 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:58:00.520444 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:58:00.520448 | orchestrator |
2026-03-05 00:58:00.520452 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-05 00:58:00.520458 | orchestrator | Thursday 05 March 2026 00:53:21 +0000 (0:00:00.622) 0:07:46.201 ********
2026-03-05 00:58:00.520462 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:58:00.520466 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:58:00.520470 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:58:00.520473 | orchestrator |
2026-03-05 00:58:00.520477 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-05 00:58:00.520481 | orchestrator | Thursday 05 March 2026 00:53:22 +0000 (0:00:00.458) 0:07:46.660 ********
2026-03-05 00:58:00.520485 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:58:00.520489 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:58:00.520492 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:58:00.520496 | orchestrator |
2026-03-05 00:58:00.520500 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-05 00:58:00.520507 | orchestrator | Thursday 05 March 2026 00:53:22 +0000 (0:00:00.510) 0:07:47.171 ********
2026-03-05 00:58:00.520511 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:58:00.520515 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:58:00.520519 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:58:00.520523 | orchestrator |
2026-03-05 00:58:00.520526 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-05 00:58:00.520530 | orchestrator | Thursday 05 March 2026 00:53:23 +0000 (0:00:00.474) 0:07:47.645 ********
2026-03-05 00:58:00.520534 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:58:00.520538 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:58:00.520541 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:58:00.520545 | orchestrator |
2026-03-05 00:58:00.520549 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-05 00:58:00.520553 | orchestrator | Thursday 05 March 2026 00:53:23 +0000 (0:00:00.507) 0:07:48.153 ********
2026-03-05 00:58:00.520557 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:58:00.520560 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:58:00.520564 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:58:00.520568 | orchestrator |
2026-03-05 00:58:00.520572 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-05 00:58:00.520575 | orchestrator | Thursday 05 March 2026 00:53:23 +0000 (0:00:00.404) 0:07:48.557 ********
2026-03-05 00:58:00.520579 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:58:00.520583 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:58:00.520587 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:58:00.520590 | orchestrator |
2026-03-05 00:58:00.520594 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-05 00:58:00.520598 | orchestrator | Thursday 05 March 2026 00:53:24 +0000 (0:00:00.353) 0:07:48.911 ********
2026-03-05 00:58:00.520602 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:58:00.520605 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:58:00.520609 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:58:00.520613 | orchestrator |
2026-03-05 00:58:00.520617 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2026-03-05 00:58:00.520620 | orchestrator | Thursday 05 March 2026 00:53:25 +0000 (0:00:00.781) 0:07:49.693 ********
2026-03-05 00:58:00.520624 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:58:00.520628 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:58:00.520632 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:58:00.520635 | orchestrator |
2026-03-05 00:58:00.520639 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2026-03-05 00:58:00.520643 | orchestrator | Thursday 05 March 2026 00:53:25 +0000 (0:00:00.413) 0:07:50.106 ********
2026-03-05 00:58:00.520647 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-05 00:58:00.520651 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-05 00:58:00.520654 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-05 00:58:00.520658 | orchestrator |
2026-03-05 00:58:00.520665 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2026-03-05 00:58:00.520669 | orchestrator | Thursday 05 March 2026 00:53:26 +0000 (0:00:00.697) 0:07:50.804 ********
2026-03-05 00:58:00.520672 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-05 00:58:00.520676 | orchestrator |
2026-03-05 00:58:00.520680 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2026-03-05 00:58:00.520684 | orchestrator | Thursday 05 March 2026 00:53:26 +0000 (0:00:00.551) 0:07:51.355 ********
2026-03-05 00:58:00.520687 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:58:00.520691 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:58:00.520695 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:58:00.520699 | orchestrator |
2026-03-05 00:58:00.520703 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2026-03-05 00:58:00.520706 | orchestrator | Thursday 05 March 2026 00:53:27 +0000 (0:00:00.494) 0:07:51.850 ********
2026-03-05 00:58:00.520710 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:58:00.520714 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:58:00.520717 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:58:00.520721 | orchestrator |
2026-03-05 00:58:00.520725 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2026-03-05 00:58:00.520729 | orchestrator | Thursday 05 March 2026 00:53:27 +0000 (0:00:00.300) 0:07:52.150 ********
2026-03-05 00:58:00.520735 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:58:00.520741 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:58:00.520748 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:58:00.520754 | orchestrator |
2026-03-05 00:58:00.520760 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2026-03-05 00:58:00.520766 | orchestrator | Thursday 05 March 2026 00:53:28 +0000 (0:00:00.410) 0:07:52.816 ********
2026-03-05 00:58:00.520772 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:58:00.520778 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:58:00.520783 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:58:00.520790 | orchestrator |
2026-03-05 00:58:00.520796 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2026-03-05 00:58:00.520803 | orchestrator | Thursday 05 March 2026 00:53:28 +0000 (0:00:00.410) 0:07:53.227 ********
2026-03-05 00:58:00.520809 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-03-05 00:58:00.520815 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-03-05 00:58:00.520821 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-03-05 00:58:00.520826 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-03-05 00:58:00.520833 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-03-05 00:58:00.520839 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-03-05 00:58:00.520851 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-03-05 00:58:00.520856 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-03-05 00:58:00.520860 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-03-05 00:58:00.520863 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
2026-03-05 00:58:00.520867 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
2026-03-05 00:58:00.520871 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
2026-03-05 00:58:00.520875 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-03-05 00:58:00.520879 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-03-05 00:58:00.520886 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-03-05 00:58:00.520889 | orchestrator |
2026-03-05 00:58:00.520893 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2026-03-05 00:58:00.520897 | orchestrator | Thursday 05 March 2026 00:53:30 +0000 (0:00:02.372) 0:07:55.599 ********
2026-03-05 00:58:00.520901 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:58:00.520905 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:58:00.520908 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:58:00.520912 | orchestrator |
2026-03-05 00:58:00.520916 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2026-03-05 00:58:00.520920 | orchestrator | Thursday 05 March 2026 00:53:31 +0000 (0:00:00.373) 0:07:55.973 ********
2026-03-05 00:58:00.520923 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-05 00:58:00.520927 | orchestrator |
2026-03-05 00:58:00.520931 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2026-03-05 00:58:00.520935 | orchestrator | Thursday 05 March 2026 00:53:31 +0000 (0:00:00.608) 0:07:56.582 ********
2026-03-05 00:58:00.520938 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2026-03-05 00:58:00.520942 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
2026-03-05 00:58:00.520946 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2026-03-05 00:58:00.520950 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
2026-03-05 00:58:00.520954 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2026-03-05 00:58:00.520957 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2026-03-05 00:58:00.520961 | orchestrator |
2026-03-05 00:58:00.520965 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2026-03-05 00:58:00.520969 | orchestrator | Thursday 05 March 2026 00:53:33 +0000 (0:00:01.231) 0:07:57.813 ********
2026-03-05 00:58:00.520973 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-05 00:58:00.520976 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-05 00:58:00.520980 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-05 00:58:00.520984 | orchestrator |
2026-03-05 00:58:00.520988 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2026-03-05 00:58:00.520991 | orchestrator | Thursday 05 March 2026 00:53:35 +0000 (0:00:02.406) 0:08:00.219 ********
2026-03-05 00:58:00.520995 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-05 00:58:00.520999 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-05 00:58:00.521003 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:58:00.521006 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-05 00:58:00.521010 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-03-05 00:58:00.521014 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:58:00.521018 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-05 00:58:00.521021 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-03-05 00:58:00.521025 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:58:00.521029 | orchestrator |
2026-03-05 00:58:00.521033 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2026-03-05 00:58:00.521036 | orchestrator | Thursday 05 March 2026 00:53:36 +0000 (0:00:01.137) 0:08:01.356 ********
2026-03-05 00:58:00.521040 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-05 00:58:00.521044 | orchestrator |
2026-03-05 00:58:00.521048 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2026-03-05 00:58:00.521052 | orchestrator | Thursday 05 March 2026 00:53:38 +0000 (0:00:02.176) 0:08:03.533 ********
2026-03-05 00:58:00.521055 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-05 00:58:00.521059 | orchestrator |
2026-03-05 00:58:00.521063 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] *******************************
2026-03-05 00:58:00.521083 | orchestrator | Thursday 05 March 2026 00:53:39 +0000 (0:00:01.026) 0:08:04.560 ********
2026-03-05 00:58:00.521088 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-130794de-baff-5f0b-9c30-9a8206b73831', 'data_vg': 'ceph-130794de-baff-5f0b-9c30-9a8206b73831'})
2026-03-05 00:58:00.521092 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-7f4ff93a-c4fd-5f9b-af1c-107d8e49bf15', 'data_vg': 'ceph-7f4ff93a-c4fd-5f9b-af1c-107d8e49bf15'})
2026-03-05 00:58:00.521096 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-f88409fd-5147-5194-8288-2488b5e44352', 'data_vg': 'ceph-f88409fd-5147-5194-8288-2488b5e44352'})
2026-03-05 00:58:00.521106 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-54671a7c-dad9-563e-9508-4448c9acfc6a', 'data_vg': 'ceph-54671a7c-dad9-563e-9508-4448c9acfc6a'})
2026-03-05 00:58:00.521110 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-56dff28b-2239-50bc-bb4f-66f9aa80ba88', 'data_vg': 'ceph-56dff28b-2239-50bc-bb4f-66f9aa80ba88'})
2026-03-05 00:58:00.521114 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-9d6733ad-9ad8-5bce-b749-e645aedee181', 'data_vg': 'ceph-9d6733ad-9ad8-5bce-b749-e645aedee181'})
2026-03-05 00:58:00.521118 | orchestrator |
2026-03-05 00:58:00.521122 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2026-03-05 00:58:00.521125 | orchestrator | Thursday 05 March 2026 00:54:22 +0000 (0:00:42.795) 0:08:47.355 ********
2026-03-05 00:58:00.521129 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:58:00.521133 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:58:00.521137 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:58:00.521140 | orchestrator |
2026-03-05 00:58:00.521144 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2026-03-05 00:58:00.521148 | orchestrator | Thursday 05 March 2026 00:54:23 +0000 (0:00:00.400) 0:08:47.756 ********
2026-03-05 00:58:00.521152 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-05 00:58:00.521156 | orchestrator |
2026-03-05 00:58:00.521159 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2026-03-05 00:58:00.521163 | orchestrator | Thursday 05 March 2026 00:54:24 +0000 (0:00:00.897) 0:08:48.653 ********
2026-03-05 00:58:00.521167 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:58:00.521171 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:58:00.521175 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:58:00.521178 | orchestrator |
2026-03-05 00:58:00.521182 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2026-03-05 00:58:00.521186 | orchestrator | Thursday 05 March 2026 00:54:24 +0000 (0:00:00.775) 0:08:49.429 ********
2026-03-05 00:58:00.521190 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:58:00.521193 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:58:00.521197 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:58:00.521201 | orchestrator |
2026-03-05 00:58:00.521205 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2026-03-05 00:58:00.521209 | orchestrator | Thursday 05 March 2026 00:54:27 +0000 (0:00:02.607) 0:08:52.036 ********
2026-03-05 00:58:00.521212 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-05 00:58:00.521216 | orchestrator |
2026-03-05 00:58:00.521220 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2026-03-05 00:58:00.521224 | orchestrator | Thursday 05 March 2026 00:54:28 +0000 (0:00:00.829) 0:08:52.866 ********
2026-03-05 00:58:00.521227 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:58:00.521231 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:58:00.521235 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:58:00.521239 | orchestrator |
2026-03-05 00:58:00.521243 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2026-03-05 00:58:00.521246 | orchestrator | Thursday 05 March 2026 00:54:29 +0000 (0:00:01.215) 0:08:54.082 ********
2026-03-05 00:58:00.521252 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:58:00.521256 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:58:00.521260 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:58:00.521264 | orchestrator |
2026-03-05 00:58:00.521267 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2026-03-05 00:58:00.521271 | orchestrator | Thursday 05 March 2026 00:54:30 +0000 (0:00:01.186) 0:08:55.269 ********
2026-03-05 00:58:00.521275 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:58:00.521279 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:58:00.521282 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:58:00.521286 | orchestrator |
2026-03-05 00:58:00.521290 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2026-03-05 00:58:00.521294 | orchestrator | Thursday 05 March 2026 00:54:32 +0000 (0:00:01.979) 0:08:57.248 ********
2026-03-05 00:58:00.521297 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:58:00.521301 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:58:00.521305 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:58:00.521309 | orchestrator |
2026-03-05 00:58:00.521312 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2026-03-05 00:58:00.521316 | orchestrator | Thursday 05 March 2026 00:54:33 +0000 (0:00:00.762) 0:08:58.011 ********
2026-03-05 00:58:00.521320 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:58:00.521324 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:58:00.521327 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:58:00.521331 | orchestrator |
2026-03-05 00:58:00.521335 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2026-03-05 00:58:00.521339 | orchestrator | Thursday 05 March 2026 00:54:33 +0000 (0:00:00.341) 0:08:58.352 ********
2026-03-05 00:58:00.521343 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-05 00:58:00.521346 | orchestrator | ok: [testbed-node-4] => (item=3)
2026-03-05 00:58:00.521350 | orchestrator | ok: [testbed-node-5] => (item=4)
2026-03-05 00:58:00.521354 | orchestrator | ok: [testbed-node-3] => (item=5)
2026-03-05 00:58:00.521357 | orchestrator | ok: [testbed-node-4] => (item=1)
2026-03-05 00:58:00.521361 | orchestrator | ok: [testbed-node-5] => (item=2)
2026-03-05 00:58:00.521365 | orchestrator |
2026-03-05 00:58:00.521369 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2026-03-05 00:58:00.521372 | orchestrator | Thursday 05 March 2026 00:54:34 +0000 (0:00:01.155) 0:08:59.508 ********
2026-03-05 00:58:00.521376 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-03-05 00:58:00.521380 | orchestrator | changed: [testbed-node-4] => (item=3)
2026-03-05 00:58:00.521384 | orchestrator | changed: [testbed-node-5] => (item=4)
2026-03-05 00:58:00.521388 | orchestrator | changed: [testbed-node-3] => (item=5)
2026-03-05 00:58:00.521391 | orchestrator | changed: [testbed-node-4] => (item=1)
2026-03-05 00:58:00.521395 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-03-05 00:58:00.521399 | orchestrator |
2026-03-05 00:58:00.521406 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2026-03-05 00:58:00.521410 | orchestrator | Thursday 05 March 2026 00:54:37 +0000 (0:00:02.593) 0:09:02.102 ********
2026-03-05 00:58:00.521414 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-03-05 00:58:00.521418 | orchestrator | changed: [testbed-node-4] => (item=3)
2026-03-05 00:58:00.521422 | orchestrator | changed: [testbed-node-5] => (item=4)
2026-03-05 00:58:00.521425 | orchestrator | changed: [testbed-node-3] => (item=5)
2026-03-05 00:58:00.521429 | orchestrator | changed: [testbed-node-4] => (item=1)
2026-03-05 00:58:00.521433 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-03-05 00:58:00.521436 | orchestrator |
2026-03-05 00:58:00.521440 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2026-03-05 00:58:00.521444 | orchestrator | Thursday 05 March 2026 00:54:41 +0000 (0:00:03.937) 0:09:06.040 ********
2026-03-05 00:58:00.521448 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:58:00.521451 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:58:00.521457 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-03-05 00:58:00.521461 | orchestrator |
2026-03-05 00:58:00.521465 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2026-03-05 00:58:00.521469 | orchestrator | Thursday 05 March 2026 00:54:44 +0000 (0:00:03.368) 0:09:09.408 ********
2026-03-05 00:58:00.521473 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:58:00.521476 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:58:00.521480 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
2026-03-05 00:58:00.521484 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-03-05 00:58:00.521488 | orchestrator |
2026-03-05 00:58:00.521491 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2026-03-05 00:58:00.521495 | orchestrator | Thursday 05 March 2026 00:54:57 +0000 (0:00:12.538) 0:09:21.947 ********
2026-03-05 00:58:00.521499 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:58:00.521503 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:58:00.521507 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:58:00.521510 | orchestrator |
2026-03-05 00:58:00.521514 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-05 00:58:00.521518 | orchestrator | Thursday 05 March 2026 00:54:58 +0000 (0:00:01.229) 0:09:23.177 ********
2026-03-05 00:58:00.521522 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:58:00.521525 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:58:00.521529 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:58:00.521533 | orchestrator |
2026-03-05 00:58:00.521537 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-03-05 00:58:00.521540 | orchestrator | Thursday 05 March 2026 00:54:58 +0000 (0:00:00.391) 0:09:23.568 ********
2026-03-05 00:58:00.521544 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-05 00:58:00.521548 | orchestrator |
2026-03-05 00:58:00.521552 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-03-05 00:58:00.521556 | orchestrator | Thursday 05 March 2026 00:54:59 +0000 (0:00:00.902) 0:09:24.471 ********
2026-03-05 00:58:00.521560 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-05 00:58:00.521566 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-05 00:58:00.521572 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-05 00:58:00.521579 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:58:00.521588 | orchestrator |
2026-03-05 00:58:00.521596 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-03-05 00:58:00.521602 | orchestrator | Thursday 05 March 2026 00:55:00 +0000 (0:00:00.570) 0:09:25.041 ********
2026-03-05 00:58:00.521607 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:58:00.521612 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:58:00.521618 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:58:00.521624 | orchestrator |
2026-03-05 00:58:00.521629 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-03-05 00:58:00.521635 | orchestrator | Thursday 05 March 2026 00:55:00 +0000 (0:00:00.389) 0:09:25.431 ********
2026-03-05 00:58:00.521640 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:58:00.521646 | orchestrator |
2026-03-05 00:58:00.521651 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-03-05 00:58:00.521657 | orchestrator | Thursday 05 March 2026 00:55:01 +0000 (0:00:00.240) 0:09:25.672 ********
2026-03-05 00:58:00.521663 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:58:00.521668 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:58:00.521674 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:58:00.521679 | orchestrator |
2026-03-05 00:58:00.521684 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-03-05 00:58:00.521690 | orchestrator | Thursday 05 March 2026 00:55:01 +0000 (0:00:00.371) 0:09:26.043 ********
2026-03-05 00:58:00.521700 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:58:00.521706 | orchestrator |
2026-03-05 00:58:00.521712 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-03-05 00:58:00.521718 | orchestrator | Thursday 05 March 2026 00:55:01 +0000 (0:00:00.271) 0:09:26.315 ********
2026-03-05 00:58:00.521723 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:58:00.521729 | orchestrator |
2026-03-05 00:58:00.521735 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-03-05 00:58:00.521742 | orchestrator | Thursday 05 March 2026 00:55:01 +0000 (0:00:00.287) 0:09:26.602 ********
2026-03-05 00:58:00.521747 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:58:00.521752 | orchestrator |
2026-03-05 00:58:00.521759 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-03-05 00:58:00.521764 | orchestrator | Thursday 05 March 2026 00:55:02 +0000 (0:00:00.148) 0:09:26.751 ********
2026-03-05 00:58:00.521767 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:58:00.521771 | orchestrator |
2026-03-05 00:58:00.521775 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-03-05 00:58:00.521779 | orchestrator | Thursday 05 March 2026 00:55:03 +0000 (0:00:00.909) 0:09:27.660 ********
2026-03-05 00:58:00.521788 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:58:00.521792 | orchestrator |
2026-03-05 00:58:00.521796 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-03-05 00:58:00.521800 | orchestrator | Thursday 05 March 2026 00:55:03 +0000 (0:00:00.255) 0:09:27.916 ********
2026-03-05 00:58:00.521803 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-05 00:58:00.521807 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-05 00:58:00.521811 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-05 00:58:00.521815 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:58:00.521818 | orchestrator |
2026-03-05 00:58:00.521822 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-03-05 00:58:00.521826 | orchestrator | Thursday 05 March 2026 00:55:03 +0000 (0:00:00.537) 0:09:28.453 ********
2026-03-05 00:58:00.521830 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:58:00.521833 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:58:00.521837 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:58:00.521841 | orchestrator |
2026-03-05 00:58:00.521845 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-03-05 00:58:00.521848 | orchestrator | Thursday 05 March 2026 00:55:04 +0000 (0:00:00.387) 0:09:28.841 ********
2026-03-05 00:58:00.521852 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:58:00.521856 | orchestrator |
2026-03-05 00:58:00.521859 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-03-05 00:58:00.521863 | orchestrator | Thursday 05 March 2026 00:55:04 +0000 (0:00:00.289) 0:09:29.131 ********
2026-03-05 00:58:00.521867 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:58:00.521871 | orchestrator |
2026-03-05 00:58:00.521874 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2026-03-05 00:58:00.521878 | orchestrator |
2026-03-05 00:58:00.521882 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-05 00:58:00.521886 | orchestrator | Thursday 05 March 2026 00:55:05 +0000 (0:00:01.109) 0:09:30.241 ********
2026-03-05 00:58:00.521890 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 00:58:00.521894 | orchestrator |
2026-03-05 00:58:00.521898 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-05 00:58:00.521901 | orchestrator | Thursday 05 March 2026 00:55:06 +0000 (0:00:01.335) 0:09:31.576 ********
2026-03-05 00:58:00.521905 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 00:58:00.521909 | orchestrator |
2026-03-05 00:58:00.521916 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-05 00:58:00.521920 | orchestrator | Thursday 05 March 2026 00:55:08 +0000 (0:00:01.435) 0:09:33.012 ********
2026-03-05 00:58:00.521924 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:58:00.521927 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:58:00.521931 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:58:00.521935 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:58:00.521939 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:58:00.521942 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:58:00.521946 | orchestrator |
2026-03-05 00:58:00.521950 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-05 00:58:00.521953 | orchestrator | Thursday 05 March 2026 00:55:09 +0000 (0:00:01.049) 0:09:34.061 ********
2026-03-05 00:58:00.521957 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:58:00.521961 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:58:00.521965 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:58:00.521971 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:58:00.521977 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:58:00.521985 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:58:00.521992 | orchestrator |
2026-03-05 00:58:00.521998 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-05 00:58:00.522005 | orchestrator | Thursday 05 March 2026 00:55:10 +0000 (0:00:00.734) 0:09:34.796 ********
2026-03-05 00:58:00.522010 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:58:00.522036 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:58:00.522042 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:58:00.522046 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:58:00.522049 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:58:00.522053 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:58:00.522057 | orchestrator |
2026-03-05 00:58:00.522061 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-05 00:58:00.522064 | orchestrator | Thursday 05 March 2026 00:55:11 +0000 (0:00:01.187) 0:09:35.984 ********
2026-03-05 00:58:00.522098 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:58:00.522104 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:58:00.522108 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:58:00.522112 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:58:00.522116 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:58:00.522119 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:58:00.522123 | orchestrator |
2026-03-05 00:58:00.522127 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-05 00:58:00.522131 | orchestrator | Thursday 05 March 2026 00:55:12 +0000 (0:00:00.847) 0:09:36.832 ********
2026-03-05 00:58:00.522134 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:58:00.522138 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:58:00.522142 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:58:00.522145 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:58:00.522149 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:58:00.522153 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:58:00.522157 | orchestrator |
2026-03-05 00:58:00.522161 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-05 00:58:00.522164 | orchestrator | Thursday 05 March 2026 00:55:13 +0000 (0:00:01.507) 0:09:38.339 ********
2026-03-05 00:58:00.522168 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:58:00.522172 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:58:00.522176 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:58:00.522179 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:58:00.522183 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:58:00.522193 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:58:00.522197 | orchestrator |
2026-03-05 00:58:00.522201 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-05 00:58:00.522205 | orchestrator | Thursday 05 March 2026 00:55:14 +0000 (0:00:00.684) 0:09:39.023 ********
2026-03-05 00:58:00.522209 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:58:00.522216 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:58:00.522220 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:58:00.522223 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:58:00.522227 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:58:00.522231 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:58:00.522235 | orchestrator |
2026-03-05 00:58:00.522238 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-05 00:58:00.522242 | orchestrator | Thursday 05 March 2026 00:55:15 +0000 (0:00:00.965) 0:09:39.989 ********
2026-03-05 00:58:00.522246 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:58:00.522249 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:58:00.522253 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:58:00.522257 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:58:00.522261 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:58:00.522264 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:58:00.522268 | orchestrator |
2026-03-05 00:58:00.522272 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-05 00:58:00.522275 | orchestrator | Thursday 05 March 2026 00:55:16 +0000 (0:00:01.074) 0:09:41.063 ********
2026-03-05 00:58:00.522279 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:58:00.522283 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:58:00.522286 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:58:00.522290 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:58:00.522294 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:58:00.522297 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:58:00.522301 | orchestrator |
2026-03-05 00:58:00.522305 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-05 00:58:00.522309 | orchestrator | Thursday 05 March 2026 00:55:17 +0000 (0:00:01.456) 0:09:42.520 ********
2026-03-05 00:58:00.522312 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:58:00.522316 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:58:00.522320 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:58:00.522324 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:58:00.522327 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:58:00.522331 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:58:00.522335 | orchestrator |
2026-03-05 00:58:00.522339 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-05 00:58:00.522342 | orchestrator | Thursday 05 March 2026 00:55:18 +0000 (0:00:00.932) 0:09:43.453 ********
2026-03-05 00:58:00.522346 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:58:00.522350 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:58:00.522353 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:58:00.522357 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:58:00.522361 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:58:00.522364 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:58:00.522368 | orchestrator |
2026-03-05 00:58:00.522372 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-05 00:58:00.522376 | orchestrator | Thursday 05 March 2026 00:55:19 +0000 (0:00:01.163) 0:09:44.616 ********
2026-03-05 00:58:00.522380 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:58:00.522383 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:58:00.522387 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:58:00.522391 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:58:00.522394 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:58:00.522398 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:58:00.522402 | orchestrator |
2026-03-05 00:58:00.522405 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-05 00:58:00.522409 | orchestrator | Thursday 05 March 2026 00:55:20 +0000 (0:00:00.762) 0:09:45.379 ********
2026-03-05 00:58:00.522413 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:58:00.522419 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:58:00.522428 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:58:00.522436 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:58:00.522442 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:58:00.522447 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:58:00.522458 | orchestrator |
2026-03-05 00:58:00.522464 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-05 00:58:00.522469 | orchestrator | Thursday 05 March 2026 00:55:21 +0000 (0:00:00.981) 0:09:46.361 ********
2026-03-05 00:58:00.522476 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:58:00.522482 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:58:00.522488 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:58:00.522494 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:58:00.522500 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.522507 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:00.522513 | orchestrator | 2026-03-05 00:58:00.522520 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-05 00:58:00.522525 | orchestrator | Thursday 05 March 2026 00:55:22 +0000 (0:00:00.801) 0:09:47.163 ******** 2026-03-05 00:58:00.522528 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.522532 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:58:00.522536 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:58:00.522539 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.522544 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.522552 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:00.522561 | orchestrator | 2026-03-05 00:58:00.522568 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-05 00:58:00.522574 | orchestrator | Thursday 05 March 2026 00:55:23 +0000 (0:00:00.939) 0:09:48.102 ******** 2026-03-05 00:58:00.522580 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.522586 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:58:00.522592 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:58:00.522598 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:00.522604 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:00.522610 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:00.522616 | orchestrator | 2026-03-05 00:58:00.522621 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-05 00:58:00.522627 | orchestrator | Thursday 05 March 2026 00:55:24 +0000 (0:00:00.654) 0:09:48.757 ******** 2026-03-05 00:58:00.522633 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.522639 | orchestrator | skipping: [testbed-node-4] 
2026-03-05 00:58:00.522645 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:58:00.522651 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:58:00.522664 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:58:00.522670 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:58:00.522677 | orchestrator | 2026-03-05 00:58:00.522683 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-05 00:58:00.522689 | orchestrator | Thursday 05 March 2026 00:55:25 +0000 (0:00:00.991) 0:09:49.749 ******** 2026-03-05 00:58:00.522695 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:58:00.522701 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:58:00.522707 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:58:00.522713 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:58:00.522719 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:58:00.522725 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:58:00.522731 | orchestrator | 2026-03-05 00:58:00.522738 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-05 00:58:00.522745 | orchestrator | Thursday 05 March 2026 00:55:25 +0000 (0:00:00.746) 0:09:50.495 ******** 2026-03-05 00:58:00.522751 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:58:00.522758 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:58:00.522765 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:58:00.522771 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:58:00.522778 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:58:00.522807 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:58:00.522815 | orchestrator | 2026-03-05 00:58:00.522822 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2026-03-05 00:58:00.522828 | orchestrator | Thursday 05 March 2026 00:55:27 +0000 (0:00:01.546) 0:09:52.042 ******** 2026-03-05 00:58:00.522840 | orchestrator | changed: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] 2026-03-05 00:58:00.522847 | orchestrator | 2026-03-05 00:58:00.522853 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2026-03-05 00:58:00.522859 | orchestrator | Thursday 05 March 2026 00:55:31 +0000 (0:00:04.064) 0:09:56.107 ******** 2026-03-05 00:58:00.522866 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-05 00:58:00.522872 | orchestrator | 2026-03-05 00:58:00.522888 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2026-03-05 00:58:00.522900 | orchestrator | Thursday 05 March 2026 00:55:33 +0000 (0:00:01.975) 0:09:58.083 ******** 2026-03-05 00:58:00.522906 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:58:00.522912 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:58:00.522919 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:58:00.522925 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:58:00.522932 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:58:00.522938 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:58:00.522944 | orchestrator | 2026-03-05 00:58:00.522950 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2026-03-05 00:58:00.522957 | orchestrator | Thursday 05 March 2026 00:55:35 +0000 (0:00:02.000) 0:10:00.083 ******** 2026-03-05 00:58:00.522964 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:58:00.522971 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:58:00.522977 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:58:00.522984 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:58:00.522990 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:58:00.522997 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:58:00.523003 | orchestrator | 2026-03-05 00:58:00.523010 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 
2026-03-05 00:58:00.523016 | orchestrator | Thursday 05 March 2026 00:55:36 +0000 (0:00:00.992) 0:10:01.076 ******** 2026-03-05 00:58:00.523024 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:58:00.523031 | orchestrator | 2026-03-05 00:58:00.523037 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2026-03-05 00:58:00.523044 | orchestrator | Thursday 05 March 2026 00:55:38 +0000 (0:00:01.573) 0:10:02.649 ******** 2026-03-05 00:58:00.523050 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:58:00.523057 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:58:00.523063 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:58:00.523079 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:58:00.523086 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:58:00.523093 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:58:00.523099 | orchestrator | 2026-03-05 00:58:00.523106 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2026-03-05 00:58:00.523112 | orchestrator | Thursday 05 March 2026 00:55:40 +0000 (0:00:02.277) 0:10:04.926 ******** 2026-03-05 00:58:00.523118 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:58:00.523125 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:58:00.523131 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:58:00.523138 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:58:00.523144 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:58:00.523150 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:58:00.523156 | orchestrator | 2026-03-05 00:58:00.523163 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2026-03-05 00:58:00.523170 | orchestrator | Thursday 05 March 2026 00:55:44 +0000 (0:00:03.782) 
0:10:08.709 ******** 2026-03-05 00:58:00.523176 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:58:00.523183 | orchestrator | 2026-03-05 00:58:00.523189 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2026-03-05 00:58:00.523195 | orchestrator | Thursday 05 March 2026 00:55:45 +0000 (0:00:01.506) 0:10:10.215 ******** 2026-03-05 00:58:00.523207 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:58:00.523214 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:58:00.523220 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:58:00.523228 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:58:00.523232 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:58:00.523236 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:58:00.523239 | orchestrator | 2026-03-05 00:58:00.523243 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2026-03-05 00:58:00.523247 | orchestrator | Thursday 05 March 2026 00:55:46 +0000 (0:00:01.025) 0:10:11.240 ******** 2026-03-05 00:58:00.523251 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:58:00.523255 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:58:00.523258 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:58:00.523262 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:58:00.523273 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:58:00.523277 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:58:00.523281 | orchestrator | 2026-03-05 00:58:00.523285 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2026-03-05 00:58:00.523288 | orchestrator | Thursday 05 March 2026 00:55:48 +0000 (0:00:02.202) 0:10:13.443 ******** 2026-03-05 00:58:00.523292 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:58:00.523296 | 
orchestrator | ok: [testbed-node-4] 2026-03-05 00:58:00.523300 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:58:00.523303 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:58:00.523307 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:58:00.523311 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:58:00.523315 | orchestrator | 2026-03-05 00:58:00.523318 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2026-03-05 00:58:00.523322 | orchestrator | 2026-03-05 00:58:00.523326 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-05 00:58:00.523330 | orchestrator | Thursday 05 March 2026 00:55:50 +0000 (0:00:01.288) 0:10:14.731 ******** 2026-03-05 00:58:00.523334 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-05 00:58:00.523339 | orchestrator | 2026-03-05 00:58:00.523345 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-05 00:58:00.523352 | orchestrator | Thursday 05 March 2026 00:55:50 +0000 (0:00:00.597) 0:10:15.329 ******** 2026-03-05 00:58:00.523358 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-05 00:58:00.523365 | orchestrator | 2026-03-05 00:58:00.523371 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-05 00:58:00.523378 | orchestrator | Thursday 05 March 2026 00:55:51 +0000 (0:00:00.883) 0:10:16.212 ******** 2026-03-05 00:58:00.523384 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.523390 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:58:00.523396 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:58:00.523403 | orchestrator | 2026-03-05 00:58:00.523409 | orchestrator | TASK [ceph-handler : Check for an osd 
container] ******************************* 2026-03-05 00:58:00.523415 | orchestrator | Thursday 05 March 2026 00:55:52 +0000 (0:00:00.455) 0:10:16.667 ******** 2026-03-05 00:58:00.523422 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:58:00.523429 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:58:00.523435 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:58:00.523442 | orchestrator | 2026-03-05 00:58:00.523448 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-05 00:58:00.523455 | orchestrator | Thursday 05 March 2026 00:55:52 +0000 (0:00:00.716) 0:10:17.383 ******** 2026-03-05 00:58:00.523461 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:58:00.523467 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:58:00.523473 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:58:00.523480 | orchestrator | 2026-03-05 00:58:00.523486 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-05 00:58:00.523497 | orchestrator | Thursday 05 March 2026 00:55:53 +0000 (0:00:01.088) 0:10:18.472 ******** 2026-03-05 00:58:00.523504 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:58:00.523510 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:58:00.523516 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:58:00.523522 | orchestrator | 2026-03-05 00:58:00.523528 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-05 00:58:00.523535 | orchestrator | Thursday 05 March 2026 00:55:54 +0000 (0:00:00.752) 0:10:19.224 ******** 2026-03-05 00:58:00.523541 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.523547 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:58:00.523553 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:58:00.523559 | orchestrator | 2026-03-05 00:58:00.523566 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-05 
00:58:00.523572 | orchestrator | Thursday 05 March 2026 00:55:54 +0000 (0:00:00.350) 0:10:19.575 ******** 2026-03-05 00:58:00.523579 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.523585 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:58:00.523591 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:58:00.523598 | orchestrator | 2026-03-05 00:58:00.523604 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-05 00:58:00.523610 | orchestrator | Thursday 05 March 2026 00:55:55 +0000 (0:00:00.403) 0:10:19.979 ******** 2026-03-05 00:58:00.523617 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.523623 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:58:00.523629 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:58:00.523636 | orchestrator | 2026-03-05 00:58:00.523642 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-05 00:58:00.523648 | orchestrator | Thursday 05 March 2026 00:55:56 +0000 (0:00:00.716) 0:10:20.695 ******** 2026-03-05 00:58:00.523655 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:58:00.523661 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:58:00.523668 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:58:00.523674 | orchestrator | 2026-03-05 00:58:00.523680 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-05 00:58:00.523686 | orchestrator | Thursday 05 March 2026 00:55:56 +0000 (0:00:00.759) 0:10:21.456 ******** 2026-03-05 00:58:00.523693 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:58:00.523699 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:58:00.523705 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:58:00.523712 | orchestrator | 2026-03-05 00:58:00.523719 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-05 00:58:00.523725 | orchestrator | 
Thursday 05 March 2026 00:55:57 +0000 (0:00:00.768) 0:10:22.225 ******** 2026-03-05 00:58:00.523731 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.523737 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:58:00.523743 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:58:00.523750 | orchestrator | 2026-03-05 00:58:00.523756 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-05 00:58:00.523763 | orchestrator | Thursday 05 March 2026 00:55:57 +0000 (0:00:00.382) 0:10:22.607 ******** 2026-03-05 00:58:00.523769 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.523775 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:58:00.523781 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:58:00.523788 | orchestrator | 2026-03-05 00:58:00.523804 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-05 00:58:00.523811 | orchestrator | Thursday 05 March 2026 00:55:58 +0000 (0:00:00.680) 0:10:23.287 ******** 2026-03-05 00:58:00.523817 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:58:00.523823 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:58:00.523829 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:58:00.523836 | orchestrator | 2026-03-05 00:58:00.523841 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-05 00:58:00.523848 | orchestrator | Thursday 05 March 2026 00:55:59 +0000 (0:00:00.366) 0:10:23.653 ******** 2026-03-05 00:58:00.523860 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:58:00.523867 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:58:00.523873 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:58:00.523880 | orchestrator | 2026-03-05 00:58:00.523886 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-05 00:58:00.523892 | orchestrator | Thursday 05 March 2026 00:55:59 +0000 
(0:00:00.350) 0:10:24.004 ******** 2026-03-05 00:58:00.523898 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:58:00.523905 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:58:00.523911 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:58:00.523917 | orchestrator | 2026-03-05 00:58:00.523923 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-05 00:58:00.523930 | orchestrator | Thursday 05 March 2026 00:55:59 +0000 (0:00:00.375) 0:10:24.379 ******** 2026-03-05 00:58:00.523936 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.523942 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:58:00.523948 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:58:00.523955 | orchestrator | 2026-03-05 00:58:00.523962 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-05 00:58:00.523968 | orchestrator | Thursday 05 March 2026 00:56:00 +0000 (0:00:00.682) 0:10:25.062 ******** 2026-03-05 00:58:00.523975 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.523979 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:58:00.523983 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:58:00.523987 | orchestrator | 2026-03-05 00:58:00.523991 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-05 00:58:00.523994 | orchestrator | Thursday 05 March 2026 00:56:00 +0000 (0:00:00.335) 0:10:25.397 ******** 2026-03-05 00:58:00.523998 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.524002 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:58:00.524006 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:58:00.524009 | orchestrator | 2026-03-05 00:58:00.524013 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-05 00:58:00.524017 | orchestrator | Thursday 05 March 2026 00:56:01 +0000 (0:00:00.401) 
0:10:25.799 ******** 2026-03-05 00:58:00.524021 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:58:00.524024 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:58:00.524028 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:58:00.524032 | orchestrator | 2026-03-05 00:58:00.524036 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-05 00:58:00.524039 | orchestrator | Thursday 05 March 2026 00:56:01 +0000 (0:00:00.368) 0:10:26.168 ******** 2026-03-05 00:58:00.524043 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:58:00.524047 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:58:00.524051 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:58:00.524054 | orchestrator | 2026-03-05 00:58:00.524058 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2026-03-05 00:58:00.524062 | orchestrator | Thursday 05 March 2026 00:56:02 +0000 (0:00:00.943) 0:10:27.111 ******** 2026-03-05 00:58:00.524066 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:58:00.524080 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:58:00.524084 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2026-03-05 00:58:00.524088 | orchestrator | 2026-03-05 00:58:00.524092 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2026-03-05 00:58:00.524096 | orchestrator | Thursday 05 March 2026 00:56:02 +0000 (0:00:00.425) 0:10:27.536 ******** 2026-03-05 00:58:00.524100 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-05 00:58:00.524104 | orchestrator | 2026-03-05 00:58:00.524107 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2026-03-05 00:58:00.524111 | orchestrator | Thursday 05 March 2026 00:56:05 +0000 (0:00:02.205) 0:10:29.742 ******** 2026-03-05 00:58:00.524117 | orchestrator | skipping: [testbed-node-3] => 
(item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2026-03-05 00:58:00.524124 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.524128 | orchestrator | 2026-03-05 00:58:00.524132 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2026-03-05 00:58:00.524136 | orchestrator | Thursday 05 March 2026 00:56:05 +0000 (0:00:00.438) 0:10:30.180 ******** 2026-03-05 00:58:00.524141 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-05 00:58:00.524149 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-05 00:58:00.524152 | orchestrator | 2026-03-05 00:58:00.524156 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2026-03-05 00:58:00.524160 | orchestrator | Thursday 05 March 2026 00:56:13 +0000 (0:00:08.145) 0:10:38.326 ******** 2026-03-05 00:58:00.524164 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-05 00:58:00.524168 | orchestrator | 2026-03-05 00:58:00.524177 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-03-05 00:58:00.524181 | orchestrator | Thursday 05 March 2026 00:56:17 +0000 (0:00:03.590) 0:10:41.916 ******** 2026-03-05 00:58:00.524185 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-03-05 00:58:00.524189 | orchestrator | 2026-03-05 00:58:00.524193 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-03-05 00:58:00.524196 | orchestrator | Thursday 05 March 2026 00:56:17 +0000 (0:00:00.518) 0:10:42.435 ******** 2026-03-05 00:58:00.524200 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2026-03-05 00:58:00.524204 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2026-03-05 00:58:00.524208 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2026-03-05 00:58:00.524212 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2026-03-05 00:58:00.524216 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2026-03-05 00:58:00.524219 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2026-03-05 00:58:00.524223 | orchestrator | 2026-03-05 00:58:00.524227 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-03-05 00:58:00.524231 | orchestrator | Thursday 05 March 2026 00:56:18 +0000 (0:00:01.039) 0:10:43.475 ******** 2026-03-05 00:58:00.524235 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-05 00:58:00.524238 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-05 00:58:00.524242 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-05 00:58:00.524246 | orchestrator | 2026-03-05 00:58:00.524250 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-03-05 00:58:00.524254 | orchestrator | Thursday 05 March 2026 00:56:21 +0000 (0:00:02.655) 0:10:46.130 ******** 2026-03-05 00:58:00.524257 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-05 00:58:00.524261 | orchestrator | skipping: [testbed-node-3] 
=> (item=None)  2026-03-05 00:58:00.524265 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:58:00.524269 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-05 00:58:00.524273 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-05 00:58:00.524277 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:58:00.524280 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-05 00:58:00.524284 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-05 00:58:00.524297 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:58:00.524304 | orchestrator | 2026-03-05 00:58:00.524310 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-03-05 00:58:00.524316 | orchestrator | Thursday 05 March 2026 00:56:22 +0000 (0:00:01.343) 0:10:47.474 ******** 2026-03-05 00:58:00.524324 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:58:00.524330 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:58:00.524336 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:58:00.524390 | orchestrator | 2026-03-05 00:58:00.524397 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-03-05 00:58:00.524404 | orchestrator | Thursday 05 March 2026 00:56:25 +0000 (0:00:02.783) 0:10:50.257 ******** 2026-03-05 00:58:00.524411 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.524417 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:58:00.524424 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:58:00.524430 | orchestrator | 2026-03-05 00:58:00.524437 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-03-05 00:58:00.524443 | orchestrator | Thursday 05 March 2026 00:56:25 +0000 (0:00:00.376) 0:10:50.633 ******** 2026-03-05 00:58:00.524450 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-03-05 00:58:00.524456 | orchestrator | 2026-03-05 00:58:00.524462 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-03-05 00:58:00.524469 | orchestrator | Thursday 05 March 2026 00:56:26 +0000 (0:00:00.931) 0:10:51.565 ******** 2026-03-05 00:58:00.524475 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-05 00:58:00.524482 | orchestrator | 2026-03-05 00:58:00.524488 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-03-05 00:58:00.524494 | orchestrator | Thursday 05 March 2026 00:56:27 +0000 (0:00:00.736) 0:10:52.301 ******** 2026-03-05 00:58:00.524500 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:58:00.524507 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:58:00.524513 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:58:00.524519 | orchestrator | 2026-03-05 00:58:00.524526 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-03-05 00:58:00.524532 | orchestrator | Thursday 05 March 2026 00:56:29 +0000 (0:00:01.496) 0:10:53.798 ******** 2026-03-05 00:58:00.524539 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:58:00.524543 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:58:00.524547 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:58:00.524550 | orchestrator | 2026-03-05 00:58:00.524554 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-03-05 00:58:00.524558 | orchestrator | Thursday 05 March 2026 00:56:30 +0000 (0:00:01.598) 0:10:55.397 ******** 2026-03-05 00:58:00.524562 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:58:00.524565 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:58:00.524569 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:58:00.524587 | orchestrator | 2026-03-05 
00:58:00.524591 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-03-05 00:58:00.524595 | orchestrator | Thursday 05 March 2026 00:56:32 +0000 (0:00:01.945) 0:10:57.342 ******** 2026-03-05 00:58:00.524599 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:58:00.524602 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:58:00.524606 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:58:00.524610 | orchestrator | 2026-03-05 00:58:00.524620 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-03-05 00:58:00.524625 | orchestrator | Thursday 05 March 2026 00:56:34 +0000 (0:00:01.962) 0:10:59.304 ******** 2026-03-05 00:58:00.524628 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:58:00.524632 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:58:00.524636 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:58:00.524640 | orchestrator | 2026-03-05 00:58:00.524644 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-05 00:58:00.524652 | orchestrator | Thursday 05 March 2026 00:56:36 +0000 (0:00:01.520) 0:11:00.824 ******** 2026-03-05 00:58:00.524655 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:58:00.524659 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:58:00.524663 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:58:00.524667 | orchestrator | 2026-03-05 00:58:00.524670 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-03-05 00:58:00.524674 | orchestrator | Thursday 05 March 2026 00:56:36 +0000 (0:00:00.673) 0:11:01.498 ******** 2026-03-05 00:58:00.524678 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-05 00:58:00.524682 | orchestrator | 2026-03-05 00:58:00.524686 | orchestrator | RUNNING HANDLER [ceph-handler : Set 
_mds_handler_called before restart] ******** 2026-03-05 00:58:00.524690 | orchestrator | Thursday 05 March 2026 00:56:37 +0000 (0:00:00.893) 0:11:02.391 ******** 2026-03-05 00:58:00.524693 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:58:00.524697 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:58:00.524701 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:58:00.524705 | orchestrator | 2026-03-05 00:58:00.524709 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-03-05 00:58:00.524712 | orchestrator | Thursday 05 March 2026 00:56:38 +0000 (0:00:00.377) 0:11:02.768 ******** 2026-03-05 00:58:00.524716 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:58:00.524720 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:58:00.524724 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:58:00.524731 | orchestrator | 2026-03-05 00:58:00.524738 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-03-05 00:58:00.524745 | orchestrator | Thursday 05 March 2026 00:56:39 +0000 (0:00:01.277) 0:11:04.046 ******** 2026-03-05 00:58:00.524752 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-05 00:58:00.524759 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-05 00:58:00.524767 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-05 00:58:00.524775 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.524781 | orchestrator | 2026-03-05 00:58:00.524789 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-03-05 00:58:00.524794 | orchestrator | Thursday 05 March 2026 00:56:40 +0000 (0:00:01.017) 0:11:05.064 ******** 2026-03-05 00:58:00.524797 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:58:00.524801 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:58:00.524805 | orchestrator | ok: [testbed-node-5] 2026-03-05 
00:58:00.524809 | orchestrator | 2026-03-05 00:58:00.524813 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-03-05 00:58:00.524816 | orchestrator | 2026-03-05 00:58:00.524822 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-05 00:58:00.524828 | orchestrator | Thursday 05 March 2026 00:56:41 +0000 (0:00:00.935) 0:11:05.999 ******** 2026-03-05 00:58:00.524834 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-05 00:58:00.524841 | orchestrator | 2026-03-05 00:58:00.524848 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-05 00:58:00.524854 | orchestrator | Thursday 05 March 2026 00:56:41 +0000 (0:00:00.558) 0:11:06.558 ******** 2026-03-05 00:58:00.524860 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-05 00:58:00.524867 | orchestrator | 2026-03-05 00:58:00.524874 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-05 00:58:00.524880 | orchestrator | Thursday 05 March 2026 00:56:42 +0000 (0:00:00.851) 0:11:07.410 ******** 2026-03-05 00:58:00.524886 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.524893 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:58:00.524899 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:58:00.524909 | orchestrator | 2026-03-05 00:58:00.524915 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-05 00:58:00.524919 | orchestrator | Thursday 05 March 2026 00:56:43 +0000 (0:00:00.333) 0:11:07.743 ******** 2026-03-05 00:58:00.524922 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:58:00.524926 | orchestrator | ok: [testbed-node-4] 2026-03-05 
00:58:00.524930 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:58:00.524934 | orchestrator | 2026-03-05 00:58:00.524938 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-05 00:58:00.524941 | orchestrator | Thursday 05 March 2026 00:56:43 +0000 (0:00:00.735) 0:11:08.478 ******** 2026-03-05 00:58:00.524945 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:58:00.524949 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:58:00.524953 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:58:00.524957 | orchestrator | 2026-03-05 00:58:00.524962 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-05 00:58:00.524968 | orchestrator | Thursday 05 March 2026 00:56:44 +0000 (0:00:01.153) 0:11:09.631 ******** 2026-03-05 00:58:00.524975 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:58:00.524981 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:58:00.524988 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:58:00.524994 | orchestrator | 2026-03-05 00:58:00.525000 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-05 00:58:00.525007 | orchestrator | Thursday 05 March 2026 00:56:45 +0000 (0:00:00.796) 0:11:10.428 ******** 2026-03-05 00:58:00.525013 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.525019 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:58:00.525026 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:58:00.525032 | orchestrator | 2026-03-05 00:58:00.525038 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-05 00:58:00.525051 | orchestrator | Thursday 05 March 2026 00:56:46 +0000 (0:00:00.337) 0:11:10.766 ******** 2026-03-05 00:58:00.525058 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.525064 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:58:00.525099 | orchestrator | skipping: 
[testbed-node-5] 2026-03-05 00:58:00.525106 | orchestrator | 2026-03-05 00:58:00.525112 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-05 00:58:00.525119 | orchestrator | Thursday 05 March 2026 00:56:46 +0000 (0:00:00.412) 0:11:11.179 ******** 2026-03-05 00:58:00.525125 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.525131 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:58:00.525137 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:58:00.525143 | orchestrator | 2026-03-05 00:58:00.525149 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-05 00:58:00.525152 | orchestrator | Thursday 05 March 2026 00:56:47 +0000 (0:00:00.703) 0:11:11.882 ******** 2026-03-05 00:58:00.525156 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:58:00.525160 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:58:00.525164 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:58:00.525169 | orchestrator | 2026-03-05 00:58:00.525175 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-05 00:58:00.525181 | orchestrator | Thursday 05 March 2026 00:56:48 +0000 (0:00:00.906) 0:11:12.789 ******** 2026-03-05 00:58:00.525188 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:58:00.525194 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:58:00.525201 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:58:00.525207 | orchestrator | 2026-03-05 00:58:00.525213 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-05 00:58:00.525219 | orchestrator | Thursday 05 March 2026 00:56:49 +0000 (0:00:00.861) 0:11:13.651 ******** 2026-03-05 00:58:00.525226 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.525232 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:58:00.525239 | orchestrator | skipping: [testbed-node-5] 2026-03-05 
00:58:00.525245 | orchestrator | 2026-03-05 00:58:00.525251 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-05 00:58:00.525262 | orchestrator | Thursday 05 March 2026 00:56:49 +0000 (0:00:00.342) 0:11:13.993 ******** 2026-03-05 00:58:00.525269 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.525275 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:58:00.525281 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:58:00.525288 | orchestrator | 2026-03-05 00:58:00.525294 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-05 00:58:00.525300 | orchestrator | Thursday 05 March 2026 00:56:49 +0000 (0:00:00.632) 0:11:14.626 ******** 2026-03-05 00:58:00.525324 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:58:00.525329 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:58:00.525333 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:58:00.525337 | orchestrator | 2026-03-05 00:58:00.525340 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-05 00:58:00.525344 | orchestrator | Thursday 05 March 2026 00:56:50 +0000 (0:00:00.471) 0:11:15.098 ******** 2026-03-05 00:58:00.525348 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:58:00.525352 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:58:00.525355 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:58:00.525360 | orchestrator | 2026-03-05 00:58:00.525366 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-05 00:58:00.525373 | orchestrator | Thursday 05 March 2026 00:56:50 +0000 (0:00:00.359) 0:11:15.457 ******** 2026-03-05 00:58:00.525379 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:58:00.525385 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:58:00.525391 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:58:00.525398 | orchestrator | 2026-03-05 
00:58:00.525404 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-05 00:58:00.525411 | orchestrator | Thursday 05 March 2026 00:56:51 +0000 (0:00:00.371) 0:11:15.829 ******** 2026-03-05 00:58:00.525417 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.525423 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:58:00.525430 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:58:00.525436 | orchestrator | 2026-03-05 00:58:00.525442 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-05 00:58:00.525449 | orchestrator | Thursday 05 March 2026 00:56:51 +0000 (0:00:00.666) 0:11:16.496 ******** 2026-03-05 00:58:00.525455 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.525462 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:58:00.525468 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:58:00.525474 | orchestrator | 2026-03-05 00:58:00.525480 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-05 00:58:00.525487 | orchestrator | Thursday 05 March 2026 00:56:52 +0000 (0:00:00.345) 0:11:16.842 ******** 2026-03-05 00:58:00.525493 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.525499 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:58:00.525505 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:58:00.525511 | orchestrator | 2026-03-05 00:58:00.525518 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-05 00:58:00.525524 | orchestrator | Thursday 05 March 2026 00:56:52 +0000 (0:00:00.362) 0:11:17.204 ******** 2026-03-05 00:58:00.525530 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:58:00.525536 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:58:00.525542 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:58:00.525549 | orchestrator | 2026-03-05 00:58:00.525555 | 
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-05 00:58:00.525562 | orchestrator | Thursday 05 March 2026 00:56:52 +0000 (0:00:00.366) 0:11:17.570 ******** 2026-03-05 00:58:00.525569 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:58:00.525575 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:58:00.525581 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:58:00.525588 | orchestrator | 2026-03-05 00:58:00.525594 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-03-05 00:58:00.525601 | orchestrator | Thursday 05 March 2026 00:56:53 +0000 (0:00:00.906) 0:11:18.477 ******** 2026-03-05 00:58:00.525607 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-05 00:58:00.525619 | orchestrator | 2026-03-05 00:58:00.525625 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-05 00:58:00.525632 | orchestrator | Thursday 05 March 2026 00:56:54 +0000 (0:00:00.617) 0:11:19.095 ******** 2026-03-05 00:58:00.525645 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-05 00:58:00.525652 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-05 00:58:00.525658 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-05 00:58:00.525664 | orchestrator | 2026-03-05 00:58:00.525671 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-05 00:58:00.525677 | orchestrator | Thursday 05 March 2026 00:56:56 +0000 (0:00:02.505) 0:11:21.600 ******** 2026-03-05 00:58:00.525684 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-05 00:58:00.525690 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-05 00:58:00.525696 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:58:00.525702 | orchestrator 
| changed: [testbed-node-4] => (item=None) 2026-03-05 00:58:00.525708 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-05 00:58:00.525715 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:58:00.525721 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-05 00:58:00.525727 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-05 00:58:00.525733 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:58:00.525739 | orchestrator | 2026-03-05 00:58:00.525745 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-03-05 00:58:00.525752 | orchestrator | Thursday 05 March 2026 00:56:58 +0000 (0:00:01.734) 0:11:23.334 ******** 2026-03-05 00:58:00.525758 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.525765 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:58:00.525771 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:58:00.525777 | orchestrator | 2026-03-05 00:58:00.525784 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-03-05 00:58:00.525791 | orchestrator | Thursday 05 March 2026 00:56:59 +0000 (0:00:00.360) 0:11:23.695 ******** 2026-03-05 00:58:00.525798 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-05 00:58:00.525804 | orchestrator | 2026-03-05 00:58:00.525810 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-03-05 00:58:00.525817 | orchestrator | Thursday 05 March 2026 00:56:59 +0000 (0:00:00.629) 0:11:24.324 ******** 2026-03-05 00:58:00.525824 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-05 00:58:00.525830 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-05 00:58:00.525837 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-05 00:58:00.525843 | orchestrator | 2026-03-05 00:58:00.525849 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-03-05 00:58:00.525855 | orchestrator | Thursday 05 March 2026 00:57:00 +0000 (0:00:01.268) 0:11:25.593 ******** 2026-03-05 00:58:00.525861 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-05 00:58:00.525867 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-05 00:58:00.525874 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-05 00:58:00.525880 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-05 00:58:00.525892 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-05 00:58:00.525898 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-05 00:58:00.525903 | orchestrator | 2026-03-05 00:58:00.525909 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-05 00:58:00.525914 | orchestrator | Thursday 05 March 2026 00:57:05 +0000 (0:00:04.638) 0:11:30.231 ******** 2026-03-05 00:58:00.525920 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-05 00:58:00.525927 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-05 00:58:00.525932 | orchestrator | 
ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-05 00:58:00.525938 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-05 00:58:00.525944 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-05 00:58:00.525950 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-05 00:58:00.525956 | orchestrator | 2026-03-05 00:58:00.525962 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-05 00:58:00.525969 | orchestrator | Thursday 05 March 2026 00:57:08 +0000 (0:00:02.571) 0:11:32.803 ******** 2026-03-05 00:58:00.525975 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-05 00:58:00.525981 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:58:00.525987 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-05 00:58:00.525993 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:58:00.526000 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-05 00:58:00.526006 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:58:00.526012 | orchestrator | 2026-03-05 00:58:00.526056 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-03-05 00:58:00.526063 | orchestrator | Thursday 05 March 2026 00:57:09 +0000 (0:00:01.318) 0:11:34.122 ******** 2026-03-05 00:58:00.526091 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-03-05 00:58:00.526098 | orchestrator | 2026-03-05 00:58:00.526104 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-03-05 00:58:00.526111 | orchestrator | Thursday 05 March 2026 00:57:09 +0000 (0:00:00.250) 0:11:34.373 ******** 2026-03-05 00:58:00.526117 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-03-05 00:58:00.526124 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-05 00:58:00.526130 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-05 00:58:00.526136 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-05 00:58:00.526143 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-05 00:58:00.526149 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.526155 | orchestrator | 2026-03-05 00:58:00.526161 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-03-05 00:58:00.526168 | orchestrator | Thursday 05 March 2026 00:57:11 +0000 (0:00:01.455) 0:11:35.829 ******** 2026-03-05 00:58:00.526174 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-05 00:58:00.526180 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-05 00:58:00.526187 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-05 00:58:00.526198 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-05 00:58:00.526205 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-05 00:58:00.526211 | orchestrator | skipping: [testbed-node-3] 2026-03-05 
00:58:00.526218 | orchestrator | 2026-03-05 00:58:00.526224 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-03-05 00:58:00.526231 | orchestrator | Thursday 05 March 2026 00:57:11 +0000 (0:00:00.724) 0:11:36.553 ******** 2026-03-05 00:58:00.526237 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-05 00:58:00.526244 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-05 00:58:00.526250 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-05 00:58:00.526257 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-05 00:58:00.526263 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-05 00:58:00.526269 | orchestrator | 2026-03-05 00:58:00.526276 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-03-05 00:58:00.526282 | orchestrator | Thursday 05 March 2026 00:57:44 +0000 (0:00:32.641) 0:12:09.194 ******** 2026-03-05 00:58:00.526288 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.526294 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:58:00.526301 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:58:00.526307 | orchestrator | 2026-03-05 00:58:00.526313 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-03-05 00:58:00.526319 | orchestrator | 
Thursday 05 March 2026 00:57:44 +0000 (0:00:00.370) 0:12:09.565 ******** 2026-03-05 00:58:00.526325 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.526332 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:58:00.526338 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:58:00.526344 | orchestrator | 2026-03-05 00:58:00.526350 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-03-05 00:58:00.526357 | orchestrator | Thursday 05 March 2026 00:57:45 +0000 (0:00:00.338) 0:12:09.903 ******** 2026-03-05 00:58:00.526363 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-05 00:58:00.526369 | orchestrator | 2026-03-05 00:58:00.526376 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-03-05 00:58:00.526382 | orchestrator | Thursday 05 March 2026 00:57:46 +0000 (0:00:00.966) 0:12:10.869 ******** 2026-03-05 00:58:00.526388 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-05 00:58:00.526395 | orchestrator | 2026-03-05 00:58:00.526401 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-03-05 00:58:00.526407 | orchestrator | Thursday 05 March 2026 00:57:46 +0000 (0:00:00.598) 0:12:11.468 ******** 2026-03-05 00:58:00.526423 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:58:00.526430 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:58:00.526436 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:58:00.526442 | orchestrator | 2026-03-05 00:58:00.526449 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-03-05 00:58:00.526459 | orchestrator | Thursday 05 March 2026 00:57:48 +0000 (0:00:01.282) 0:12:12.751 ******** 2026-03-05 00:58:00.526469 | orchestrator | changed: 
[testbed-node-3] 2026-03-05 00:58:00.526472 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:58:00.526476 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:58:00.526480 | orchestrator | 2026-03-05 00:58:00.526484 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-03-05 00:58:00.526487 | orchestrator | Thursday 05 March 2026 00:57:49 +0000 (0:00:01.552) 0:12:14.304 ******** 2026-03-05 00:58:00.526491 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:58:00.526495 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:58:00.526499 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:58:00.526502 | orchestrator | 2026-03-05 00:58:00.526506 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-03-05 00:58:00.526510 | orchestrator | Thursday 05 March 2026 00:57:51 +0000 (0:00:01.942) 0:12:16.246 ******** 2026-03-05 00:58:00.526514 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-05 00:58:00.526517 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-05 00:58:00.526521 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-05 00:58:00.526525 | orchestrator | 2026-03-05 00:58:00.526529 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-05 00:58:00.526532 | orchestrator | Thursday 05 March 2026 00:57:54 +0000 (0:00:02.861) 0:12:19.108 ******** 2026-03-05 00:58:00.526536 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.526540 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:58:00.526544 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:58:00.526547 | orchestrator 
| 2026-03-05 00:58:00.526551 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-03-05 00:58:00.526555 | orchestrator | Thursday 05 March 2026 00:57:54 +0000 (0:00:00.352) 0:12:19.461 ******** 2026-03-05 00:58:00.526559 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-05 00:58:00.526562 | orchestrator | 2026-03-05 00:58:00.526566 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-03-05 00:58:00.526570 | orchestrator | Thursday 05 March 2026 00:57:55 +0000 (0:00:00.531) 0:12:19.993 ******** 2026-03-05 00:58:00.526574 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:58:00.526577 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:58:00.526581 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:58:00.526585 | orchestrator | 2026-03-05 00:58:00.526589 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-03-05 00:58:00.526592 | orchestrator | Thursday 05 March 2026 00:57:55 +0000 (0:00:00.524) 0:12:20.517 ******** 2026-03-05 00:58:00.526596 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:58:00.526600 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:58:00.526603 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:58:00.526607 | orchestrator | 2026-03-05 00:58:00.526611 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-03-05 00:58:00.526615 | orchestrator | Thursday 05 March 2026 00:57:56 +0000 (0:00:00.374) 0:12:20.891 ******** 2026-03-05 00:58:00.526618 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-05 00:58:00.526622 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-05 00:58:00.526626 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-05 00:58:00.526630 | orchestrator 
| skipping: [testbed-node-3] 2026-03-05 00:58:00.526633 | orchestrator | 2026-03-05 00:58:00.526637 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-03-05 00:58:00.526641 | orchestrator | Thursday 05 March 2026 00:57:56 +0000 (0:00:00.564) 0:12:21.455 ******** 2026-03-05 00:58:00.526645 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:58:00.526648 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:58:00.526655 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:58:00.526658 | orchestrator | 2026-03-05 00:58:00.526662 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-05 00:58:00.526666 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2026-03-05 00:58:00.526670 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2026-03-05 00:58:00.526674 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2026-03-05 00:58:00.526677 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2026-03-05 00:58:00.526681 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2026-03-05 00:58:00.526685 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2026-03-05 00:58:00.526689 | orchestrator | 2026-03-05 00:58:00.526693 | orchestrator | 2026-03-05 00:58:00.526696 | orchestrator | 2026-03-05 00:58:00.526704 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-05 00:58:00.526708 | orchestrator | Thursday 05 March 2026 00:57:57 +0000 (0:00:00.232) 0:12:21.688 ******** 2026-03-05 00:58:00.526712 | orchestrator | =============================================================================== 
2026-03-05 00:58:00.526716 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 55.69s 2026-03-05 00:58:00.526720 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 42.80s 2026-03-05 00:58:00.526723 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 32.64s 2026-03-05 00:58:00.526727 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 24.52s 2026-03-05 00:58:00.526731 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 22.17s 2026-03-05 00:58:00.526735 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 15.13s 2026-03-05 00:58:00.526738 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.54s 2026-03-05 00:58:00.526742 | orchestrator | ceph-mon : Fetch ceph initial keys ------------------------------------- 10.77s 2026-03-05 00:58:00.526746 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.10s 2026-03-05 00:58:00.526749 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.15s 2026-03-05 00:58:00.526753 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 7.43s 2026-03-05 00:58:00.526757 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.94s 2026-03-05 00:58:00.526761 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.98s 2026-03-05 00:58:00.526764 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 5.50s 2026-03-05 00:58:00.526768 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.64s 2026-03-05 00:58:00.526772 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.06s 2026-03-05 
00:58:00.526776 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 4.03s 2026-03-05 00:58:00.526779 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.94s 2026-03-05 00:58:00.526783 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.78s 2026-03-05 00:58:00.526787 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.59s 2026-03-05 00:58:00.526791 | orchestrator | 2026-03-05 00:58:00 | INFO  | Task 14559a05-a062-4407-9e25-b974b67c1a9d is in state STARTED 2026-03-05 00:58:00.526797 | orchestrator | 2026-03-05 00:58:00 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:58:03.566263 | orchestrator | 2026-03-05 00:58:03 | INFO  | Task 9992cc8e-05e2-4dcb-83af-88c297892dd7 is in state STARTED 2026-03-05 00:58:03.567149 | orchestrator | 2026-03-05 00:58:03 | INFO  | Task 86f80eee-c277-41da-b162-68fb5ea68664 is in state STARTED 2026-03-05 00:58:03.570086 | orchestrator | 2026-03-05 00:58:03 | INFO  | Task 14559a05-a062-4407-9e25-b974b67c1a9d is in state STARTED 2026-03-05 00:58:03.570138 | orchestrator | 2026-03-05 00:58:03 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:58:06.615949 | orchestrator | 2026-03-05 00:58:06 | INFO  | Task 9992cc8e-05e2-4dcb-83af-88c297892dd7 is in state STARTED 2026-03-05 00:58:06.617369 | orchestrator | 2026-03-05 00:58:06 | INFO  | Task 86f80eee-c277-41da-b162-68fb5ea68664 is in state STARTED 2026-03-05 00:58:06.619370 | orchestrator | 2026-03-05 00:58:06 | INFO  | Task 14559a05-a062-4407-9e25-b974b67c1a9d is in state STARTED 2026-03-05 00:58:06.619424 | orchestrator | 2026-03-05 00:58:06 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:58:09.689735 | orchestrator | 2026-03-05 00:58:09 | INFO  | Task 9992cc8e-05e2-4dcb-83af-88c297892dd7 is in state STARTED 2026-03-05 00:58:09.691344 | orchestrator | 2026-03-05 00:58:09 | INFO  | Task 
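The TASKS RECAP above lists the slowest tasks sorted by duration, as Ansible's timing callback does. A minimal sketch of parsing and sorting such recap lines — the regex and sample lines are illustrative, not the callback's implementation:

```python
import re

# A few recap lines as they appear in the log above.
recap = [
    "ceph-osd : Use ceph-volume to create osds ------------------------------ 42.80s",
    "ceph-container-common : Pulling Ceph container image ------------------- 55.69s",
    "ceph-rgw : Create rgw pools -------------------------------------------- 32.64s",
]

def parse_durations(lines):
    """Extract (task name, seconds) pairs from recap-style lines."""
    out = []
    for line in lines:
        m = re.match(r"(.+?) -+ ([0-9.]+)s$", line)
        if m:
            out.append((m.group(1).strip(), float(m.group(2))))
    return out

pairs = parse_durations(recap)
# Longest-running task first, matching the recap's ordering.
pairs.sort(key=lambda p: p[1], reverse=True)
print(pairs[0])  # ('ceph-container-common : Pulling Ceph container image', 55.69)
```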
86f80eee-c277-41da-b162-68fb5ea68664 is in state STARTED 2026-03-05 00:58:09.693137 | orchestrator | 2026-03-05 00:58:09 | INFO  | Task 14559a05-a062-4407-9e25-b974b67c1a9d is in state STARTED 2026-03-05 00:58:09.693217 | orchestrator | 2026-03-05 00:58:09 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:58:55.441793 | orchestrator | 2026-03-05 00:58:55 | INFO  | Task 9992cc8e-05e2-4dcb-83af-88c297892dd7 is in state STARTED 2026-03-05 00:58:55.443596 | orchestrator | 2026-03-05 00:58:55 | INFO  | Task 86f80eee-c277-41da-b162-68fb5ea68664 is in state STARTED 2026-03-05 00:58:55.444367 | orchestrator | 2026-03-05 00:58:55 | INFO  | Task 14559a05-a062-4407-9e25-b974b67c1a9d is in state
STARTED 2026-03-05 00:58:55.444401 | orchestrator | 2026-03-05 00:58:55 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:58:58.487743 | orchestrator | 2026-03-05 00:58:58 | INFO  | Task 9992cc8e-05e2-4dcb-83af-88c297892dd7 is in state STARTED 2026-03-05 00:58:58.490679 | orchestrator | 2026-03-05 00:58:58 | INFO  | Task 86f80eee-c277-41da-b162-68fb5ea68664 is in state STARTED 2026-03-05 00:58:58.492685 | orchestrator | 2026-03-05 00:58:58 | INFO  | Task 14559a05-a062-4407-9e25-b974b67c1a9d is in state STARTED 2026-03-05 00:58:58.492717 | orchestrator | 2026-03-05 00:58:58 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:59:01.524652 | orchestrator | 2026-03-05 00:59:01 | INFO  | Task 9992cc8e-05e2-4dcb-83af-88c297892dd7 is in state STARTED 2026-03-05 00:59:01.526094 | orchestrator | 2026-03-05 00:59:01 | INFO  | Task 86f80eee-c277-41da-b162-68fb5ea68664 is in state STARTED 2026-03-05 00:59:01.526918 | orchestrator | 2026-03-05 00:59:01 | INFO  | Task 14559a05-a062-4407-9e25-b974b67c1a9d is in state STARTED 2026-03-05 00:59:01.526956 | orchestrator | 2026-03-05 00:59:01 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:59:04.558496 | orchestrator | 2026-03-05 00:59:04 | INFO  | Task 9992cc8e-05e2-4dcb-83af-88c297892dd7 is in state SUCCESS 2026-03-05 00:59:04.559346 | orchestrator | 2026-03-05 00:59:04.559376 | orchestrator | 2026-03-05 00:59:04.559384 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-05 00:59:04.559391 | orchestrator | 2026-03-05 00:59:04.559398 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-05 00:59:04.559406 | orchestrator | Thursday 05 March 2026 00:55:59 +0000 (0:00:00.340) 0:00:00.340 ******** 2026-03-05 00:59:04.559413 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:59:04.559420 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:59:04.559427 | orchestrator | ok: [testbed-node-2] 
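The client polls each task ID until it leaves the STARTED state, sleeping between checks. That wait loop can be sketched as follows — `get_state` is a stand-in for the real task-state API call, not the actual client code:

```python
import time

def wait_for_task(task_id, get_state, interval=1.0, timeout=600.0):
    """Poll get_state(task_id) until it reports a terminal state.

    get_state is a hypothetical stand-in for the task-state lookup.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        state = get_state(task_id)
        print(f"Task {task_id} is in state {state}")
        if state in ("SUCCESS", "FAILURE"):
            return state
        print(f"Wait {interval:g} second(s) until the next check")
        time.sleep(interval)
    raise TimeoutError(f"task {task_id} still not finished after {timeout}s")

# Simulated state source: STARTED twice, then SUCCESS.
states = iter(["STARTED", "STARTED", "SUCCESS"])
result = wait_for_task("9992cc8e", lambda _tid: next(states), interval=0.01)
print(result)  # SUCCESS
```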
2026-03-05 00:59:04.559477 | orchestrator | 2026-03-05 00:59:04.559486 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-05 00:59:04.559493 | orchestrator | Thursday 05 March 2026 00:55:59 +0000 (0:00:00.337) 0:00:00.677 ******** 2026-03-05 00:59:04.559579 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-03-05 00:59:04.559587 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-03-05 00:59:04.559594 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-03-05 00:59:04.559600 | orchestrator | 2026-03-05 00:59:04.559607 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-03-05 00:59:04.559614 | orchestrator | 2026-03-05 00:59:04.559620 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-05 00:59:04.559627 | orchestrator | Thursday 05 March 2026 00:55:59 +0000 (0:00:00.562) 0:00:01.240 ******** 2026-03-05 00:59:04.559642 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:59:04.559650 | orchestrator | 2026-03-05 00:59:04.559656 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-03-05 00:59:04.559663 | orchestrator | Thursday 05 March 2026 00:56:00 +0000 (0:00:00.572) 0:00:01.812 ******** 2026-03-05 00:59:04.559670 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-05 00:59:04.559676 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-05 00:59:04.559683 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-05 00:59:04.559689 | orchestrator | 2026-03-05 00:59:04.559695 | orchestrator | TASK [opensearch : Ensuring config directories exist] 
************************** 2026-03-05 00:59:04.559702 | orchestrator | Thursday 05 March 2026 00:56:01 +0000 (0:00:00.807) 0:00:02.620 ******** 2026-03-05 00:59:04.559711 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-05 00:59:04.559720 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-05 00:59:04.559735 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-05 00:59:04.559753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-05 00:59:04.559761 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 
'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-05 00:59:04.559769 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}}}}) 2026-03-05 00:59:04.559776 | orchestrator | 2026-03-05 00:59:04.559782 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-05 00:59:04.559789 | orchestrator | Thursday 05 March 2026 00:56:03 +0000 (0:00:01.823) 0:00:04.444 ******** 2026-03-05 00:59:04.559795 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:59:04.559806 | orchestrator | 2026-03-05 00:59:04.559812 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-03-05 00:59:04.559818 | orchestrator | Thursday 05 March 2026 00:56:03 +0000 (0:00:00.512) 0:00:04.956 ******** 2026-03-05 00:59:04.559832 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-05 00:59:04.559842 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-05 00:59:04.559849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-05 00:59:04.559856 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-05 00:59:04.559867 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-05 00:59:04.559881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-05 00:59:04.559888 | orchestrator | 2026-03-05 00:59:04.559894 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-03-05 00:59:04.559901 | orchestrator | Thursday 05 March 2026 00:56:06 +0000 (0:00:02.511) 0:00:07.467 ******** 2026-03-05 00:59:04.559908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-05 00:59:04.559915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 
'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-05 00:59:04.559926 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:59:04.559933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-05 00:59:04.559944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 
'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-05 00:59:04.559955 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:59:04.559961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-05 00:59:04.560013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-05 00:59:04.560024 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:59:04.560031 | orchestrator | 2026-03-05 00:59:04.560037 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-03-05 00:59:04.560043 | orchestrator | Thursday 05 March 2026 00:56:07 +0000 (0:00:01.085) 0:00:08.553 ******** 2026-03-05 00:59:04.560050 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-05 00:59:04.560062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-05 00:59:04.560068 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:59:04.560078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-05 00:59:04.560085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-05 00:59:04.560095 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:59:04.560118 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-05 00:59:04.560131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-05 00:59:04.560138 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:59:04.560144 | orchestrator | 2026-03-05 00:59:04.560151 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-03-05 00:59:04.560159 | orchestrator | Thursday 05 March 2026 00:56:08 +0000 (0:00:00.877) 0:00:09.430 ******** 2026-03-05 00:59:04.560169 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g 
-Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-05 00:59:04.560176 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-05 00:59:04.560195 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-05 00:59:04.560207 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-05 00:59:04.560217 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-05 00:59:04.560225 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-05 00:59:04.560236 | orchestrator | 2026-03-05 00:59:04.560242 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-03-05 00:59:04.560249 | orchestrator | Thursday 05 March 2026 00:56:10 +0000 (0:00:02.363) 0:00:11.794 ******** 2026-03-05 00:59:04.560256 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:59:04.560263 | orchestrator | changed: 
[testbed-node-1] 2026-03-05 00:59:04.560270 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:59:04.560276 | orchestrator | 2026-03-05 00:59:04.560283 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-03-05 00:59:04.560289 | orchestrator | Thursday 05 March 2026 00:56:13 +0000 (0:00:02.857) 0:00:14.651 ******** 2026-03-05 00:59:04.560296 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:59:04.560303 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:59:04.560310 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:59:04.560316 | orchestrator | 2026-03-05 00:59:04.560323 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-03-05 00:59:04.560329 | orchestrator | Thursday 05 March 2026 00:56:15 +0000 (0:00:01.723) 0:00:16.375 ******** 2026-03-05 00:59:04.560336 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-05 00:59:04.560347 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': 
{'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-05 00:59:04.560358 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-05 00:59:04.560365 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-05 00:59:04.560377 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-05 00:59:04.560388 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-05 00:59:04.560396 | orchestrator | 2026-03-05 00:59:04.560402 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-05 00:59:04.560409 | orchestrator | Thursday 05 March 2026 00:56:17 +0000 (0:00:01.934) 0:00:18.309 ******** 2026-03-05 00:59:04.560416 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:59:04.560423 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:59:04.560430 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:59:04.560437 | orchestrator | 2026-03-05 00:59:04.560443 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-05 00:59:04.560454 | orchestrator | Thursday 05 March 2026 00:56:17 +0000 (0:00:00.290) 0:00:18.600 ******** 2026-03-05 00:59:04.560461 | orchestrator | 2026-03-05 00:59:04.560467 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-05 00:59:04.560474 | orchestrator | Thursday 05 March 2026 00:56:17 +0000 (0:00:00.067) 0:00:18.668 ******** 2026-03-05 00:59:04.560481 | orchestrator | 2026-03-05 00:59:04.560487 | orchestrator | TASK [opensearch : Flush handlers] 
********************************************* 2026-03-05 00:59:04.560498 | orchestrator | Thursday 05 March 2026 00:56:17 +0000 (0:00:00.060) 0:00:18.728 ******** 2026-03-05 00:59:04.560505 | orchestrator | 2026-03-05 00:59:04.560511 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-03-05 00:59:04.560518 | orchestrator | Thursday 05 March 2026 00:56:17 +0000 (0:00:00.067) 0:00:18.796 ******** 2026-03-05 00:59:04.560524 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:59:04.560531 | orchestrator | 2026-03-05 00:59:04.560538 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-03-05 00:59:04.560544 | orchestrator | Thursday 05 March 2026 00:56:17 +0000 (0:00:00.188) 0:00:18.985 ******** 2026-03-05 00:59:04.560551 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:59:04.560557 | orchestrator | 2026-03-05 00:59:04.560563 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-03-05 00:59:04.560569 | orchestrator | Thursday 05 March 2026 00:56:18 +0000 (0:00:00.525) 0:00:19.511 ******** 2026-03-05 00:59:04.560575 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:59:04.560581 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:59:04.560587 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:59:04.560593 | orchestrator | 2026-03-05 00:59:04.560600 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-03-05 00:59:04.560607 | orchestrator | Thursday 05 March 2026 00:57:24 +0000 (0:01:06.395) 0:01:25.906 ******** 2026-03-05 00:59:04.560613 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:59:04.560620 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:59:04.560627 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:59:04.560633 | orchestrator | 2026-03-05 00:59:04.560640 | orchestrator | TASK [opensearch : include_tasks] 
********************************************** 2026-03-05 00:59:04.560646 | orchestrator | Thursday 05 March 2026 00:58:48 +0000 (0:01:23.752) 0:02:49.659 ******** 2026-03-05 00:59:04.560654 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:59:04.560661 | orchestrator | 2026-03-05 00:59:04.560668 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-03-05 00:59:04.560675 | orchestrator | Thursday 05 March 2026 00:58:49 +0000 (0:00:00.740) 0:02:50.400 ******** 2026-03-05 00:59:04.560682 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:59:04.560690 | orchestrator | 2026-03-05 00:59:04.560697 | orchestrator | TASK [opensearch : Wait for OpenSearch cluster to become healthy] ************** 2026-03-05 00:59:04.560705 | orchestrator | Thursday 05 March 2026 00:58:51 +0000 (0:00:02.726) 0:02:53.127 ******** 2026-03-05 00:59:04.560712 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:59:04.560719 | orchestrator | 2026-03-05 00:59:04.560726 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-03-05 00:59:04.560733 | orchestrator | Thursday 05 March 2026 00:58:54 +0000 (0:00:02.389) 0:02:55.516 ******** 2026-03-05 00:59:04.560740 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:59:04.560747 | orchestrator | 2026-03-05 00:59:04.560753 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-03-05 00:59:04.560761 | orchestrator | Thursday 05 March 2026 00:58:56 +0000 (0:00:02.720) 0:02:58.236 ******** 2026-03-05 00:59:04.560768 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:59:04.560774 | orchestrator | 2026-03-05 00:59:04.560782 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-03-05 00:59:04.560790 | orchestrator | Thursday 05 March 2026 00:58:59 +0000 (0:00:02.754) 
0:03:00.991 ******** 2026-03-05 00:59:04.560796 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:59:04.560803 | orchestrator | 2026-03-05 00:59:04.560810 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-05 00:59:04.560817 | orchestrator | testbed-node-0 : ok=19  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-05 00:59:04.560824 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-05 00:59:04.560842 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-05 00:59:04.560849 | orchestrator | 2026-03-05 00:59:04.560856 | orchestrator | 2026-03-05 00:59:04.560863 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-05 00:59:04.560870 | orchestrator | Thursday 05 March 2026 00:59:02 +0000 (0:00:02.643) 0:03:03.634 ******** 2026-03-05 00:59:04.560876 | orchestrator | =============================================================================== 2026-03-05 00:59:04.560884 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 83.75s 2026-03-05 00:59:04.560891 | orchestrator | opensearch : Restart opensearch container ------------------------------ 66.40s 2026-03-05 00:59:04.560897 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.86s 2026-03-05 00:59:04.560904 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.75s 2026-03-05 00:59:04.560910 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.73s 2026-03-05 00:59:04.560916 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.72s 2026-03-05 00:59:04.560923 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.64s 2026-03-05 
00:59:04.560929 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.51s 2026-03-05 00:59:04.560938 | orchestrator | opensearch : Wait for OpenSearch cluster to become healthy -------------- 2.39s 2026-03-05 00:59:04.560945 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.36s 2026-03-05 00:59:04.560951 | orchestrator | opensearch : Check opensearch containers -------------------------------- 1.93s 2026-03-05 00:59:04.560957 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.82s 2026-03-05 00:59:04.560964 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.72s 2026-03-05 00:59:04.560970 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.09s 2026-03-05 00:59:04.560976 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.88s 2026-03-05 00:59:04.560983 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.81s 2026-03-05 00:59:04.560989 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.74s 2026-03-05 00:59:04.560995 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.57s 2026-03-05 00:59:04.561001 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.56s 2026-03-05 00:59:04.561007 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.53s 2026-03-05 00:59:04.561014 | orchestrator | 2026-03-05 00:59:04 | INFO  | Task 86f80eee-c277-41da-b162-68fb5ea68664 is in state STARTED 2026-03-05 00:59:04.561769 | orchestrator | 2026-03-05 00:59:04 | INFO  | Task 14559a05-a062-4407-9e25-b974b67c1a9d is in state STARTED 2026-03-05 00:59:04.561827 | orchestrator | 2026-03-05 00:59:04 | INFO  | Wait 1 second(s) until the next check 
2026-03-05 00:59:07.601538 | orchestrator | 2026-03-05 00:59:07 | INFO  | Task 86f80eee-c277-41da-b162-68fb5ea68664 is in state STARTED
2026-03-05 00:59:07.601629 | orchestrator | 2026-03-05 00:59:07 | INFO  | Task 14559a05-a062-4407-9e25-b974b67c1a9d is in state STARTED
2026-03-05 00:59:07.601637 | orchestrator | 2026-03-05 00:59:07 | INFO  | Wait 1 second(s) until the next check
2026-03-05 00:59:10.645338 | orchestrator | 2026-03-05 00:59:10 | INFO  | Task 86f80eee-c277-41da-b162-68fb5ea68664 is in state STARTED
2026-03-05 00:59:10.646908 | orchestrator | 2026-03-05 00:59:10 | INFO  | Task 14559a05-a062-4407-9e25-b974b67c1a9d is in state STARTED
2026-03-05 00:59:10.646946 | orchestrator | 2026-03-05 00:59:10 | INFO  | Wait 1 second(s) until the next check
2026-03-05 00:59:13.690001 | orchestrator | 2026-03-05 00:59:13 | INFO  | Task 86f80eee-c277-41da-b162-68fb5ea68664 is in state STARTED
2026-03-05 00:59:13.693196 | orchestrator | 2026-03-05 00:59:13 | INFO  | Task 14559a05-a062-4407-9e25-b974b67c1a9d is in state STARTED
2026-03-05 00:59:13.693283 | orchestrator | 2026-03-05 00:59:13 | INFO  | Wait 1 second(s) until the next check
2026-03-05 00:59:16.748727 | orchestrator | 2026-03-05 00:59:16 | INFO  | Task b20a26f7-7de7-4cb5-ad3c-d65a6a176e91 is in state STARTED
2026-03-05 00:59:16.754431 | orchestrator | 2026-03-05 00:59:16 | INFO  | Task 86f80eee-c277-41da-b162-68fb5ea68664 is in state SUCCESS
2026-03-05 00:59:16.754529 | orchestrator |
2026-03-05 00:59:16.756821 | orchestrator |
2026-03-05 00:59:16.756968 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************
2026-03-05 00:59:16.756988 | orchestrator |
2026-03-05 00:59:16.757000 | orchestrator | TASK [Inform the user about the following task] ********************************
2026-03-05 00:59:16.757012 | orchestrator | Thursday 05 March 2026 00:55:58 +0000 (0:00:00.103) 0:00:00.103 ********
2026-03-05 00:59:16.757024 | orchestrator | ok: [localhost] => {
2026-03-05 00:59:16.757037 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine."
2026-03-05 00:59:16.757049 | orchestrator | }
2026-03-05 00:59:16.757060 | orchestrator |
2026-03-05 00:59:16.757071 | orchestrator | TASK [Check MariaDB service] ***************************************************
2026-03-05 00:59:16.757314 | orchestrator | Thursday 05 March 2026 00:55:58 +0000 (0:00:00.049) 0:00:00.153 ********
2026-03-05 00:59:16.757342 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"}
2026-03-05 00:59:16.757363 | orchestrator | ...ignoring
2026-03-05 00:59:16.757382 | orchestrator |
2026-03-05 00:59:16.757400 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ********
2026-03-05 00:59:16.757416 | orchestrator | Thursday 05 March 2026 00:56:01 +0000 (0:00:03.213) 0:00:03.366 ********
2026-03-05 00:59:16.757427 | orchestrator | skipping: [localhost]
2026-03-05 00:59:16.757438 | orchestrator |
2026-03-05 00:59:16.757449 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ******************************
2026-03-05 00:59:16.757460 | orchestrator | Thursday 05 March 2026 00:56:02 +0000 (0:00:00.127) 0:00:03.493 ********
2026-03-05 00:59:16.757471 | orchestrator | ok: [localhost]
2026-03-05 00:59:16.757482 | orchestrator |
2026-03-05 00:59:16.757493 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-05 00:59:16.757504 | orchestrator |
2026-03-05 00:59:16.757515 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-05 00:59:16.757525 | orchestrator | Thursday 05 March 2026 00:56:02 +0000 (0:00:00.341) 0:00:03.674 ********
2026-03-05 00:59:16.757536 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:59:16.757547 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:59:16.757558 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:59:16.757569 | orchestrator |
2026-03-05 00:59:16.757580 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-05 00:59:16.757608 | orchestrator | Thursday 05 March 2026 00:56:02 +0000 (0:00:00.521) 0:00:04.015 ********
2026-03-05 00:59:16.757619 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2026-03-05 00:59:16.757631 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2026-03-05 00:59:16.757641 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2026-03-05 00:59:16.757652 | orchestrator |
2026-03-05 00:59:16.757664 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2026-03-05 00:59:16.757675 | orchestrator |
2026-03-05 00:59:16.757686 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2026-03-05 00:59:16.757697 | orchestrator | Thursday 05 March 2026 00:56:03 +0000 (0:00:00.521) 0:00:04.537 ********
2026-03-05 00:59:16.757709 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-05 00:59:16.757743 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-03-05 00:59:16.757755 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-03-05 00:59:16.757766 | orchestrator |
2026-03-05 00:59:16.757777 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-03-05 00:59:16.757787 | orchestrator | Thursday 05 March 2026 00:56:03 +0000 (0:00:00.415) 0:00:04.952 ********
2026-03-05 00:59:16.757798 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 00:59:16.757810 | orchestrator |
2026-03-05 00:59:16.757821 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-03-05
00:59:16.757832 | orchestrator | Thursday 05 March 2026 00:56:04 +0000 (0:00:00.510) 0:00:05.463 ******** 2026-03-05 00:59:16.757866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-05 00:59:16.757889 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-05 00:59:16.757911 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-05 00:59:16.757925 | orchestrator | 2026-03-05 00:59:16.757947 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-03-05 00:59:16.757960 | orchestrator | Thursday 05 March 2026 00:56:07 +0000 (0:00:02.968) 0:00:08.432 ******** 2026-03-05 00:59:16.757974 | orchestrator | skipping: 
[testbed-node-1]
2026-03-05 00:59:16.757987 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:59:16.758000 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:59:16.758064 | orchestrator |
2026-03-05 00:59:16.758081 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] ***************************
2026-03-05 00:59:16.758094 | orchestrator | Thursday 05 March 2026 00:56:07 +0000 (0:00:00.548) 0:00:08.981 ********
2026-03-05 00:59:16.758144 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:59:16.758158 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:59:16.758171 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:59:16.758183 | orchestrator |
2026-03-05 00:59:16.758195 | orchestrator | TASK [mariadb : Copying over config.json files for services] *******************
2026-03-05 00:59:16.758208 | orchestrator | Thursday 05 March 2026 00:56:09 +0000 (0:00:01.434) 0:00:10.415 ********
2026-03-05 00:59:16.758228 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0
192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-05 00:59:16.758260 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-05 00:59:16.758282 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 
'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-05 00:59:16.758304 | orchestrator |
2026-03-05 00:59:16.758315 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] ****************
2026-03-05 00:59:16.758326 | orchestrator | Thursday 05 March 2026 00:56:12 +0000 (0:00:03.970) 0:00:14.385 ********
2026-03-05 00:59:16.758337 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:59:16.758348 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:59:16.758358 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:59:16.758369 | orchestrator |
2026-03-05 00:59:16.758380 | orchestrator | TASK [mariadb : Copying over galera.cnf] ***************************************
2026-03-05 00:59:16.758391 | orchestrator | Thursday 05 March 2026 00:56:14 +0000 (0:00:01.087) 0:00:15.473 ********
2026-03-05 00:59:16.758401 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:59:16.758412 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:59:16.758423 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:59:16.758433 | orchestrator |
2026-03-05 00:59:16.758444 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-03-05 00:59:16.758455 | orchestrator | Thursday 05 March 2026 00:56:18 +0000 (0:00:03.920) 0:00:19.393 ********
2026-03-05 00:59:16.758466 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 00:59:16.758477 | orchestrator |
2026-03-05 00:59:16.758488 |
orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-03-05 00:59:16.758499 | orchestrator | Thursday 05 March 2026 00:56:18 +0000 (0:00:00.535) 0:00:19.929 ******** 2026-03-05 00:59:16.758520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-05 00:59:16.758539 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:59:16.758556 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 
 2026-03-05 00:59:16.758568 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:59:16.758586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-05 00:59:16.758598 | orchestrator | skipping: [testbed-node-2] 
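Note on the `custom_member_list` entries repeated in the item dumps above: they describe an HAProxy active/backup layout in which only testbed-node-0 receives traffic and the other Galera nodes are marked `backup`. A minimal sketch of how such a member list could be generated (node names and addresses are taken from the log; the helper function is illustrative, not the actual kolla-ansible/OSISM template):

```python
# Sketch: build an HAProxy active/backup member list for a Galera cluster.
# Node names and IPs come from the log above; the function itself is
# hypothetical and only mirrors the pattern visible in custom_member_list.

def haproxy_member_list(nodes, port=3306):
    """Render one 'server' line per node; every node after the first is 'backup'."""
    lines = []
    for index, (name, address) in enumerate(nodes):
        line = (f"  server {name} {address}:{port} "
                f"check port {port} inter 2000 rise 2 fall 5")
        if index > 0:
            line += " backup"  # only the first node actively serves writes
        lines.append(line)
    return lines

nodes = [
    ("testbed-node-0", "192.168.16.10"),
    ("testbed-node-1", "192.168.16.11"),
    ("testbed-node-2", "192.168.16.12"),
]
members = haproxy_member_list(nodes)
```

Sending all writes through a single active backend avoids multi-writer certification conflicts in Galera while keeping the remaining nodes as hot standbys that HAProxy promotes on health-check failure.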
2026-03-05 00:59:16.758609 | orchestrator | 2026-03-05 00:59:16.758620 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-03-05 00:59:16.758638 | orchestrator | Thursday 05 March 2026 00:56:21 +0000 (0:00:03.300) 0:00:23.229 ******** 2026-03-05 00:59:16.758654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 
3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-05 00:59:16.758666 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:59:16.758683 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-05 00:59:16.758695 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:59:16.758712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 
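Each service item above also carries a `healthcheck` mapping (interval 30, retries 3, start period 5, timeout 30, test `['CMD-SHELL', '/usr/bin/clustercheck']`). As a hedged illustration of what that mapping corresponds to in container terms, the sketch below translates it into standard `docker run` health-check flags; the converter is hypothetical, not OSISM or kolla code:

```python
# Sketch: translate the healthcheck mapping shown in the log into the
# equivalent `docker run` options. Dict keys mirror the service definition
# above; the converter itself is illustrative only.

healthcheck = {
    "interval": "30",
    "retries": "3",
    "start_period": "5",
    "test": ["CMD-SHELL", "/usr/bin/clustercheck"],
    "timeout": "30",
}

def healthcheck_flags(hc):
    """Build docker run health-check options for a CMD-SHELL style test."""
    kind, command = hc["test"][0], " ".join(hc["test"][1:])
    assert kind == "CMD-SHELL"  # only the shell form is handled in this sketch
    return [
        "--health-cmd", command,
        "--health-interval", f"{hc['interval']}s",
        "--health-retries", hc["retries"],
        "--health-start-period", f"{hc['start_period']}s",
        "--health-timeout", f"{hc['timeout']}s",
    ]

flags = healthcheck_flags(healthcheck)
```

`/usr/bin/clustercheck` is the Galera-aware probe inside the container, so a node that is syncing or a non-primary component fails its health check and is taken out of rotation.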
 2026-03-05 00:59:16.758730 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:59:16.758741 | orchestrator | 2026-03-05 00:59:16.758752 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-03-05 00:59:16.758762 | orchestrator | Thursday 05 March 2026 00:56:26 +0000 (0:00:04.406) 0:00:27.636 ******** 2026-03-05 00:59:16.758774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 
2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-05 00:59:16.758786 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:59:16.758816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-05 00:59:16.758835 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:59:16.758847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check 
port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-05 00:59:16.758859 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:59:16.758869 | orchestrator | 2026-03-05 00:59:16.758880 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2026-03-05 00:59:16.758891 | orchestrator | Thursday 05 March 2026 00:56:29 +0000 (0:00:03.540) 0:00:31.176 ******** 2026-03-05 00:59:16.758911 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 
testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-05 00:59:16.758974 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-05 00:59:16.758996 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-05 00:59:16.759015 | 
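The `custom_member_list` entries in the task items above follow a simple pattern: the first MariaDB shard node is the active HAProxy backend and every other node is marked `backup`, so Galera writes land on a single node at a time. A minimal sketch of generating those lines (the function name and node list are ours, not kolla-ansible's actual template logic):

```python
# Sketch (not OSISM/kolla-ansible's real template): rebuild the HAProxy
# custom_member_list lines seen in the log, where only the first shard
# node takes traffic and the rest are "backup" servers.
def member_lines(nodes, port=3306):
    lines = []
    for i, (name, addr) in enumerate(nodes):
        line = (f" server {name} {addr}:{port} check port {port} "
                f"inter 2000 rise 2 fall 5")
        if i > 0:
            line += " backup"  # only the first node is active
        lines.append(line)
    return lines

nodes = [("testbed-node-0", "192.168.16.10"),
         ("testbed-node-1", "192.168.16.11"),
         ("testbed-node-2", "192.168.16.12")]
for line in member_lines(nodes):
    print(line)
```

The `check port 3306 inter 2000 rise 2 fall 5` part tells HAProxy to probe the port every 2000 ms and flip a server's state after 2 consecutive successes or 5 consecutive failures.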
orchestrator | 2026-03-05 00:59:16.759026 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-03-05 00:59:16.759037 | orchestrator | Thursday 05 March 2026 00:56:33 +0000 (0:00:03.319) 0:00:34.496 ******** 2026-03-05 00:59:16.759048 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:59:16.759059 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:59:16.759070 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:59:16.759081 | orchestrator | 2026-03-05 00:59:16.759091 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-03-05 00:59:16.759102 | orchestrator | Thursday 05 March 2026 00:56:34 +0000 (0:00:01.122) 0:00:35.618 ******** 2026-03-05 00:59:16.759138 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:59:16.759162 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:59:16.759186 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:59:16.759204 | orchestrator | 2026-03-05 00:59:16.759250 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2026-03-05 00:59:16.759430 | orchestrator | Thursday 05 March 2026 00:56:34 +0000 (0:00:00.440) 0:00:36.059 ******** 2026-03-05 00:59:16.759447 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:59:16.759458 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:59:16.759469 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:59:16.759480 | orchestrator | 2026-03-05 00:59:16.759491 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-03-05 00:59:16.759502 | orchestrator | Thursday 05 March 2026 00:56:35 +0000 (0:00:00.433) 0:00:36.492 ******** 2026-03-05 00:59:16.759514 | orchestrator | fatal: [testbed-node-0]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2026-03-05 00:59:16.759525 | orchestrator | ...ignoring 2026-03-05 00:59:16.759537 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2026-03-05 00:59:16.759548 | orchestrator | ...ignoring 2026-03-05 00:59:16.759559 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2026-03-05 00:59:16.759570 | orchestrator | ...ignoring 2026-03-05 00:59:16.759581 | orchestrator | 2026-03-05 00:59:16.759592 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-03-05 00:59:16.759603 | orchestrator | Thursday 05 March 2026 00:56:46 +0000 (0:00:11.079) 0:00:47.571 ******** 2026-03-05 00:59:16.759613 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:59:16.759624 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:59:16.759635 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:59:16.759645 | orchestrator | 2026-03-05 00:59:16.759656 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-03-05 00:59:16.759678 | orchestrator | Thursday 05 March 2026 00:56:46 +0000 (0:00:00.518) 0:00:48.090 ******** 2026-03-05 00:59:16.759689 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:59:16.759700 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:59:16.759710 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:59:16.759721 | orchestrator | 2026-03-05 00:59:16.759732 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-03-05 00:59:16.759743 | orchestrator | Thursday 05 March 2026 00:56:47 +0000 (0:00:00.841) 0:00:48.931 ******** 2026-03-05 00:59:16.759753 | orchestrator | skipping: 
[testbed-node-0] 2026-03-05 00:59:16.759764 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:59:16.759775 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:59:16.759786 | orchestrator | 2026-03-05 00:59:16.759797 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-03-05 00:59:16.759808 | orchestrator | Thursday 05 March 2026 00:56:48 +0000 (0:00:00.538) 0:00:49.469 ******** 2026-03-05 00:59:16.759818 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:59:16.759829 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:59:16.759840 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:59:16.759850 | orchestrator | 2026-03-05 00:59:16.759861 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-03-05 00:59:16.759872 | orchestrator | Thursday 05 March 2026 00:56:48 +0000 (0:00:00.583) 0:00:50.053 ******** 2026-03-05 00:59:16.759882 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:59:16.759893 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:59:16.759904 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:59:16.759915 | orchestrator | 2026-03-05 00:59:16.759925 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-03-05 00:59:16.759936 | orchestrator | Thursday 05 March 2026 00:56:49 +0000 (0:00:00.475) 0:00:50.528 ******** 2026-03-05 00:59:16.759957 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:59:16.759969 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:59:16.759979 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:59:16.760010 | orchestrator | 2026-03-05 00:59:16.760037 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-05 00:59:16.760060 | orchestrator | Thursday 05 March 2026 00:56:49 +0000 (0:00:00.847) 0:00:51.375 ******** 2026-03-05 00:59:16.760071 | orchestrator | skipping: 
[testbed-node-1] 2026-03-05 00:59:16.760081 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:59:16.760092 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2026-03-05 00:59:16.760171 | orchestrator | 2026-03-05 00:59:16.760197 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2026-03-05 00:59:16.760218 | orchestrator | Thursday 05 March 2026 00:56:50 +0000 (0:00:00.547) 0:00:51.923 ******** 2026-03-05 00:59:16.760234 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:59:16.760245 | orchestrator | 2026-03-05 00:59:16.760255 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2026-03-05 00:59:16.760266 | orchestrator | Thursday 05 March 2026 00:57:01 +0000 (0:00:11.377) 0:01:03.300 ******** 2026-03-05 00:59:16.760277 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:59:16.760287 | orchestrator | 2026-03-05 00:59:16.760298 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-05 00:59:16.760309 | orchestrator | Thursday 05 March 2026 00:57:02 +0000 (0:00:00.167) 0:01:03.468 ******** 2026-03-05 00:59:16.760320 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:59:16.760330 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:59:16.760341 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:59:16.760352 | orchestrator | 2026-03-05 00:59:16.760363 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2026-03-05 00:59:16.760374 | orchestrator | Thursday 05 March 2026 00:57:03 +0000 (0:00:01.253) 0:01:04.721 ******** 2026-03-05 00:59:16.760385 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:59:16.760395 | orchestrator | 2026-03-05 00:59:16.760406 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2026-03-05 00:59:16.760425 | orchestrator | Thursday 05 
March 2026 00:57:12 +0000 (0:00:09.377) 0:01:14.099 ******** 2026-03-05 00:59:16.760436 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:59:16.760447 | orchestrator | 2026-03-05 00:59:16.760458 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2026-03-05 00:59:16.760476 | orchestrator | Thursday 05 March 2026 00:57:14 +0000 (0:00:01.703) 0:01:15.802 ******** 2026-03-05 00:59:16.760487 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:59:16.760497 | orchestrator | 2026-03-05 00:59:16.760508 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2026-03-05 00:59:16.760519 | orchestrator | Thursday 05 March 2026 00:57:17 +0000 (0:00:03.047) 0:01:18.850 ******** 2026-03-05 00:59:16.760531 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:59:16.760541 | orchestrator | 2026-03-05 00:59:16.760552 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-03-05 00:59:16.760563 | orchestrator | Thursday 05 March 2026 00:57:17 +0000 (0:00:00.144) 0:01:18.995 ******** 2026-03-05 00:59:16.760574 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:59:16.760585 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:59:16.760595 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:59:16.760606 | orchestrator | 2026-03-05 00:59:16.760617 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-03-05 00:59:16.760628 | orchestrator | Thursday 05 March 2026 00:57:17 +0000 (0:00:00.376) 0:01:19.372 ******** 2026-03-05 00:59:16.760644 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:59:16.760661 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:59:16.760689 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:59:16.760709 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-03-05 00:59:16.760727 | orchestrator | 
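The "Check MariaDB service port liveness" task earlier (and the "Wait for … port liveness" handlers here) use Ansible's `wait_for` module with a search string: connect to port 3306 and look for "MariaDB" in the server greeting. The initial timeouts were expected, since no container was running yet. A rough stand-in for that check (names and timeouts are ours, not the module's internals):

```python
# Sketch of what wait_for with search_regex="MariaDB" effectively does:
# retry a TCP connect and scan the server greeting for a marker string.
import socket
import time

def wait_for_banner(host, port, needle=b"MariaDB", timeout=10.0):
    """Return True once `needle` appears in the greeting, False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=2) as sock:
                sock.settimeout(2)
                greeting = sock.recv(1024)
                if needle in greeting:
                    return True
        except OSError:
            pass  # nothing listening yet; retry after a short pause
        time.sleep(0.5)
    return False  # mirrors "Timeout when waiting for search string MariaDB"
```

MariaDB sends its version banner unsolicited as part of the MySQL protocol handshake, which is why a plain read after connect is enough here.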
2026-03-05 00:59:16.760741 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-03-05 00:59:16.760756 | orchestrator | skipping: no hosts matched 2026-03-05 00:59:16.760769 | orchestrator | 2026-03-05 00:59:16.760784 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-03-05 00:59:16.760798 | orchestrator | 2026-03-05 00:59:16.760812 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-03-05 00:59:16.760827 | orchestrator | Thursday 05 March 2026 00:57:18 +0000 (0:00:00.627) 0:01:19.999 ******** 2026-03-05 00:59:16.760842 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:59:16.760857 | orchestrator | 2026-03-05 00:59:16.760872 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-05 00:59:16.760887 | orchestrator | Thursday 05 March 2026 00:57:43 +0000 (0:00:25.050) 0:01:45.049 ******** 2026-03-05 00:59:16.760902 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:59:16.760918 | orchestrator | 2026-03-05 00:59:16.760933 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-05 00:59:16.760948 | orchestrator | Thursday 05 March 2026 00:57:55 +0000 (0:00:11.568) 0:01:56.618 ******** 2026-03-05 00:59:16.760962 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:59:16.760977 | orchestrator | 2026-03-05 00:59:16.760992 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-03-05 00:59:16.761008 | orchestrator | 2026-03-05 00:59:16.761021 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-03-05 00:59:16.761035 | orchestrator | Thursday 05 March 2026 00:57:57 +0000 (0:00:02.467) 0:01:59.086 ******** 2026-03-05 00:59:16.761051 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:59:16.761067 | orchestrator | 
2026-03-05 00:59:16.761081 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-05 00:59:16.761095 | orchestrator | Thursday 05 March 2026 00:58:17 +0000 (0:00:19.501) 0:02:18.588 ******** 2026-03-05 00:59:16.761135 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:59:16.761152 | orchestrator | 2026-03-05 00:59:16.761166 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-05 00:59:16.761182 | orchestrator | Thursday 05 March 2026 00:58:33 +0000 (0:00:16.594) 0:02:35.182 ******** 2026-03-05 00:59:16.761211 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:59:16.761227 | orchestrator | 2026-03-05 00:59:16.761243 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-03-05 00:59:16.761260 | orchestrator | 2026-03-05 00:59:16.761290 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-03-05 00:59:16.761307 | orchestrator | Thursday 05 March 2026 00:58:36 +0000 (0:00:03.037) 0:02:38.219 ******** 2026-03-05 00:59:16.761324 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:59:16.761339 | orchestrator | 2026-03-05 00:59:16.761355 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-05 00:59:16.761370 | orchestrator | Thursday 05 March 2026 00:58:50 +0000 (0:00:13.784) 0:02:52.004 ******** 2026-03-05 00:59:16.761385 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:59:16.761400 | orchestrator | 2026-03-05 00:59:16.761417 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-05 00:59:16.761434 | orchestrator | Thursday 05 March 2026 00:58:56 +0000 (0:00:05.656) 0:02:57.661 ******** 2026-03-05 00:59:16.761451 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:59:16.761467 | orchestrator | 2026-03-05 00:59:16.761484 | orchestrator | PLAY [Apply mariadb 
post-configuration] **************************************** 2026-03-05 00:59:16.761497 | orchestrator | 2026-03-05 00:59:16.761507 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-03-05 00:59:16.761523 | orchestrator | Thursday 05 March 2026 00:58:59 +0000 (0:00:02.963) 0:03:00.625 ******** 2026-03-05 00:59:16.761746 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:59:16.761766 | orchestrator | 2026-03-05 00:59:16.761776 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-03-05 00:59:16.761786 | orchestrator | Thursday 05 March 2026 00:58:59 +0000 (0:00:00.579) 0:03:01.205 ******** 2026-03-05 00:59:16.761796 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:59:16.761806 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:59:16.761816 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:59:16.761826 | orchestrator | 2026-03-05 00:59:16.761836 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-03-05 00:59:16.761845 | orchestrator | Thursday 05 March 2026 00:59:02 +0000 (0:00:02.568) 0:03:03.773 ******** 2026-03-05 00:59:16.761855 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:59:16.761865 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:59:16.761874 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:59:16.761884 | orchestrator | 2026-03-05 00:59:16.761894 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-03-05 00:59:16.761913 | orchestrator | Thursday 05 March 2026 00:59:04 +0000 (0:00:02.484) 0:03:06.258 ******** 2026-03-05 00:59:16.761923 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:59:16.761932 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:59:16.761942 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:59:16.761952 | orchestrator | 
2026-03-05 00:59:16.761961 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-03-05 00:59:16.761971 | orchestrator | Thursday 05 March 2026 00:59:07 +0000 (0:00:02.485) 0:03:08.743 ******** 2026-03-05 00:59:16.761981 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:59:16.761990 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:59:16.762000 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:59:16.762010 | orchestrator | 2026-03-05 00:59:16.762051 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-03-05 00:59:16.762063 | orchestrator | Thursday 05 March 2026 00:59:09 +0000 (0:00:02.417) 0:03:11.161 ******** 2026-03-05 00:59:16.762079 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:59:16.762095 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:59:16.762181 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:59:16.762196 | orchestrator | 2026-03-05 00:59:16.762212 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-03-05 00:59:16.762228 | orchestrator | Thursday 05 March 2026 00:59:13 +0000 (0:00:03.434) 0:03:14.595 ******** 2026-03-05 00:59:16.762261 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:59:16.762277 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:59:16.762292 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:59:16.762306 | orchestrator | 2026-03-05 00:59:16.762323 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-05 00:59:16.762341 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-03-05 00:59:16.762360 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2026-03-05 00:59:16.762378 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1 
 2026-03-05 00:59:16.762396 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-03-05 00:59:16.762412 | orchestrator | 2026-03-05 00:59:16.762428 | orchestrator | 2026-03-05 00:59:16.762442 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-05 00:59:16.762458 | orchestrator | Thursday 05 March 2026 00:59:13 +0000 (0:00:00.257) 0:03:14.853 ******** 2026-03-05 00:59:16.762473 | orchestrator | =============================================================================== 2026-03-05 00:59:16.762488 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 44.55s 2026-03-05 00:59:16.762505 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 28.16s 2026-03-05 00:59:16.762520 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 13.79s 2026-03-05 00:59:16.762537 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 11.38s 2026-03-05 00:59:16.762553 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 11.08s 2026-03-05 00:59:16.762569 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 9.38s 2026-03-05 00:59:16.762596 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 5.66s 2026-03-05 00:59:16.762613 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.51s 2026-03-05 00:59:16.762626 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 4.41s 2026-03-05 00:59:16.762640 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.97s 2026-03-05 00:59:16.762653 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.92s 2026-03-05 00:59:16.762667 | 
orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.54s 2026-03-05 00:59:16.762680 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.43s 2026-03-05 00:59:16.762693 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.32s 2026-03-05 00:59:16.762703 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.30s 2026-03-05 00:59:16.762711 | orchestrator | Check MariaDB service --------------------------------------------------- 3.21s 2026-03-05 00:59:16.762719 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 3.05s 2026-03-05 00:59:16.762726 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 2.97s 2026-03-05 00:59:16.762734 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.96s 2026-03-05 00:59:16.762742 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.57s 2026-03-05 00:59:16.762750 | orchestrator | 2026-03-05 00:59:16 | INFO  | Task 5a3cefa6-daeb-41e8-abe0-8112cfa2fb0e is in state STARTED 2026-03-05 00:59:16.762758 | orchestrator | 2026-03-05 00:59:16 | INFO  | Task 14559a05-a062-4407-9e25-b974b67c1a9d is in state STARTED 2026-03-05 00:59:16.762775 | orchestrator | 2026-03-05 00:59:16 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:59:19.807103 | orchestrator | 2026-03-05 00:59:19 | INFO  | Task b20a26f7-7de7-4cb5-ad3c-d65a6a176e91 is in state STARTED 2026-03-05 00:59:19.808615 | orchestrator | 2026-03-05 00:59:19 | INFO  | Task 5a3cefa6-daeb-41e8-abe0-8112cfa2fb0e is in state STARTED 2026-03-05 00:59:19.810273 | orchestrator | 2026-03-05 00:59:19 | INFO  | Task 14559a05-a062-4407-9e25-b974b67c1a9d is in state STARTED 2026-03-05 00:59:19.810430 | orchestrator | 2026-03-05 00:59:19 | INFO  | Wait 1 second(s) until the next check 
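The "Task … is in state STARTED / Wait 1 second(s) until the next check" lines are the osism client polling its Celery-style task IDs until each one finishes. A minimal sketch of such a loop (`get_state` is a hypothetical stand-in for the real status lookup, not the osism API):

```python
# Sketch of the task-state polling loop seen in the log: check every
# task ID on each pass and keep waiting while any is still STARTED.
import time

def wait_for_tasks(task_ids, get_state, interval=1.0, max_checks=None):
    """Block until no task reports STARTED; return the final states."""
    checks = 0
    while True:
        states = {tid: get_state(tid) for tid in task_ids}
        if all(s != "STARTED" for s in states.values()):
            return states
        checks += 1
        if max_checks is not None and checks >= max_checks:
            raise TimeoutError(f"tasks still running after {checks} checks")
        time.sleep(interval)
```

A fixed one-second interval keeps the log readable but chatty; a production poller would typically add backoff or a hard deadline, as the optional `max_checks` guard hints.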
2026-03-05 01:00:17.678549 | orchestrator | 2026-03-05 01:00:17 | INFO  | Task b20a26f7-7de7-4cb5-ad3c-d65a6a176e91 is in state STARTED 2026-03-05 01:00:17.681541 | orchestrator | 2026-03-05 01:00:17 | INFO  | Task 5a3cefa6-daeb-41e8-abe0-8112cfa2fb0e is in state STARTED 2026-03-05 01:00:17.683497 | orchestrator | 2026-03-05 01:00:17 | INFO  | Task 14559a05-a062-4407-9e25-b974b67c1a9d is in state STARTED 2026-03-05 01:00:17.683551 | orchestrator | 2026-03-05 01:00:17 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:00:20.730557 | orchestrator | 2026-03-05 01:00:20 | INFO  | Task 
b20a26f7-7de7-4cb5-ad3c-d65a6a176e91 is in state STARTED 2026-03-05 01:00:20.730900 | orchestrator | 2026-03-05 01:00:20 | INFO  | Task 5a3cefa6-daeb-41e8-abe0-8112cfa2fb0e is in state STARTED 2026-03-05 01:00:20.731843 | orchestrator | 2026-03-05 01:00:20 | INFO  | Task 14559a05-a062-4407-9e25-b974b67c1a9d is in state STARTED 2026-03-05 01:00:20.731889 | orchestrator | 2026-03-05 01:00:20 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:00:23.765993 | orchestrator | 2026-03-05 01:00:23 | INFO  | Task f4bf84bd-7ada-4ae7-8290-b64245076069 is in state STARTED 2026-03-05 01:00:23.766931 | orchestrator | 2026-03-05 01:00:23 | INFO  | Task b20a26f7-7de7-4cb5-ad3c-d65a6a176e91 is in state STARTED 2026-03-05 01:00:23.768649 | orchestrator | 2026-03-05 01:00:23 | INFO  | Task 5a3cefa6-daeb-41e8-abe0-8112cfa2fb0e is in state STARTED 2026-03-05 01:00:23.774079 | orchestrator | 2026-03-05 01:00:23.774346 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-05 01:00:23.774363 | orchestrator | 2.16.14 2026-03-05 01:00:23.774374 | orchestrator | 2026-03-05 01:00:23.774383 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2026-03-05 01:00:23.774392 | orchestrator | 2026-03-05 01:00:23.774401 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-05 01:00:23.774652 | orchestrator | Thursday 05 March 2026 00:58:02 +0000 (0:00:00.764) 0:00:00.764 ******** 2026-03-05 01:00:23.774667 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-05 01:00:23.774683 | orchestrator | 2026-03-05 01:00:23.774705 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-05 01:00:23.774724 | orchestrator | Thursday 05 March 2026 00:58:02 +0000 (0:00:00.671) 0:00:01.435 ******** 2026-03-05 01:00:23.774740 | orchestrator | ok: 
[testbed-node-3]
2026-03-05 01:00:23.774756 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:00:23.774772 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:00:23.774789 | orchestrator |
2026-03-05 01:00:23.774807 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-03-05 01:00:23.775005 | orchestrator | Thursday 05 March 2026 00:58:03 +0000 (0:00:00.705) 0:00:02.141 ********
2026-03-05 01:00:23.775018 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:00:23.775027 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:00:23.775035 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:00:23.775044 | orchestrator |
2026-03-05 01:00:23.775053 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-03-05 01:00:23.775062 | orchestrator | Thursday 05 March 2026 00:58:04 +0000 (0:00:00.464) 0:00:02.606 ********
2026-03-05 01:00:23.775070 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:00:23.775079 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:00:23.775088 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:00:23.775096 | orchestrator |
2026-03-05 01:00:23.775105 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-03-05 01:00:23.775113 | orchestrator | Thursday 05 March 2026 00:58:04 +0000 (0:00:00.894) 0:00:03.501 ********
2026-03-05 01:00:23.775122 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:00:23.775154 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:00:23.775165 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:00:23.775174 | orchestrator |
2026-03-05 01:00:23.775183 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-03-05 01:00:23.775191 | orchestrator | Thursday 05 March 2026 00:58:05 +0000 (0:00:00.359) 0:00:03.860 ********
2026-03-05 01:00:23.775200 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:00:23.775208 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:00:23.775217 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:00:23.775226 | orchestrator |
2026-03-05 01:00:23.775234 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-03-05 01:00:23.775243 | orchestrator | Thursday 05 March 2026 00:58:05 +0000 (0:00:00.361) 0:00:04.221 ********
2026-03-05 01:00:23.775252 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:00:23.775260 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:00:23.775269 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:00:23.775277 | orchestrator |
2026-03-05 01:00:23.775286 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-03-05 01:00:23.775295 | orchestrator | Thursday 05 March 2026 00:58:06 +0000 (0:00:00.348) 0:00:04.570 ********
2026-03-05 01:00:23.775304 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:23.775313 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:23.775321 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:23.775330 | orchestrator |
2026-03-05 01:00:23.775356 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-03-05 01:00:23.775365 | orchestrator | Thursday 05 March 2026 00:58:06 +0000 (0:00:00.589) 0:00:05.160 ********
2026-03-05 01:00:23.775374 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:00:23.775382 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:00:23.775391 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:00:23.775399 | orchestrator |
2026-03-05 01:00:23.775418 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-03-05 01:00:23.775427 | orchestrator | Thursday 05 March 2026 00:58:06 +0000 (0:00:00.303) 0:00:05.463 ********
2026-03-05 01:00:23.775436 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-05 01:00:23.775445 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-05 01:00:23.775454 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-05 01:00:23.775462 | orchestrator |
2026-03-05 01:00:23.775471 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-03-05 01:00:23.775480 | orchestrator | Thursday 05 March 2026 00:58:07 +0000 (0:00:00.787) 0:00:06.250 ********
2026-03-05 01:00:23.775488 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:00:23.775497 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:00:23.775505 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:00:23.775514 | orchestrator |
2026-03-05 01:00:23.775523 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-03-05 01:00:23.775532 | orchestrator | Thursday 05 March 2026 00:58:08 +0000 (0:00:00.516) 0:00:06.767 ********
2026-03-05 01:00:23.775541 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-05 01:00:23.775549 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-05 01:00:23.775558 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-05 01:00:23.775566 | orchestrator |
2026-03-05 01:00:23.775575 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-03-05 01:00:23.775583 | orchestrator | Thursday 05 March 2026 00:58:10 +0000 (0:00:02.359) 0:00:09.127 ********
2026-03-05 01:00:23.775592 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-05 01:00:23.775601 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-05 01:00:23.775610 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-05 01:00:23.775618 | orchestrator | skipping: [testbed-node-3]
2026-03-05
01:00:23.775627 | orchestrator |
2026-03-05 01:00:23.775684 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-03-05 01:00:23.775702 | orchestrator | Thursday 05 March 2026 00:58:11 +0000 (0:00:00.841) 0:00:09.968 ********
2026-03-05 01:00:23.775718 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-05 01:00:23.775738 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-05 01:00:23.775754 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-05 01:00:23.775768 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:23.775779 | orchestrator |
2026-03-05 01:00:23.775789 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-03-05 01:00:23.775799 | orchestrator | Thursday 05 March 2026 00:58:12 +0000 (0:00:00.896) 0:00:10.864 ********
2026-03-05 01:00:23.775812 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-05 01:00:23.775833 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-05 01:00:23.775844 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-05 01:00:23.775855 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:23.775865 | orchestrator |
2026-03-05 01:00:23.775875 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-03-05 01:00:23.775886 | orchestrator | Thursday 05 March 2026 00:58:12 +0000 (0:00:00.390) 0:00:11.254 ********
2026-03-05 01:00:23.775903 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '28c3a2458f96', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-05 00:58:08.952626', 'end': '2026-03-05 00:58:08.996495', 'delta': '0:00:00.043869', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['28c3a2458f96'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-05 01:00:23.775917 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '8a07f1e052cd', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-05 00:58:09.871697', 'end': '2026-03-05 00:58:09.912800', 'delta': '0:00:00.041103', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['8a07f1e052cd'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-05 01:00:23.775962 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '62323de01373', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-05 00:58:10.427177', 'end': '2026-03-05 00:58:10.474973', 'delta': '0:00:00.047796', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['62323de01373'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-05 01:00:23.775974 | orchestrator |
2026-03-05 01:00:23.775984 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-03-05 01:00:23.776001 | orchestrator | Thursday 05 March 2026 00:58:12 +0000 (0:00:00.216) 0:00:11.471 ********
2026-03-05 01:00:23.776011 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:00:23.776020 | orchestrator | ok: [testbed-node-4]
2026-03-05
01:00:23.776028 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:00:23.776037 | orchestrator |
2026-03-05 01:00:23.776045 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-03-05 01:00:23.776054 | orchestrator | Thursday 05 March 2026 00:58:13 +0000 (0:00:00.520) 0:00:11.991 ********
2026-03-05 01:00:23.776063 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2026-03-05 01:00:23.776071 | orchestrator |
2026-03-05 01:00:23.776080 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-03-05 01:00:23.776088 | orchestrator | Thursday 05 March 2026 00:58:15 +0000 (0:00:02.184) 0:00:14.176 ********
2026-03-05 01:00:23.776097 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:23.776105 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:23.776114 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:23.776122 | orchestrator |
2026-03-05 01:00:23.776206 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-03-05 01:00:23.776217 | orchestrator | Thursday 05 March 2026 00:58:15 +0000 (0:00:00.333) 0:00:14.509 ********
2026-03-05 01:00:23.776225 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:23.776234 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:23.776243 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:23.776251 | orchestrator |
2026-03-05 01:00:23.776260 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-05 01:00:23.776268 | orchestrator | Thursday 05 March 2026 00:58:16 +0000 (0:00:00.476) 0:00:14.986 ********
2026-03-05 01:00:23.776277 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:23.776291 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:23.776311 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:23.776329 | orchestrator |
2026-03-05 01:00:23.776344 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-03-05 01:00:23.776358 | orchestrator | Thursday 05 March 2026 00:58:17 +0000 (0:00:00.576) 0:00:15.562 ********
2026-03-05 01:00:23.776373 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:00:23.776386 | orchestrator |
2026-03-05 01:00:23.776401 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-03-05 01:00:23.776416 | orchestrator | Thursday 05 March 2026 00:58:17 +0000 (0:00:00.136) 0:00:15.698 ********
2026-03-05 01:00:23.776431 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:23.776446 | orchestrator |
2026-03-05 01:00:23.776461 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-05 01:00:23.776485 | orchestrator | Thursday 05 March 2026 00:58:17 +0000 (0:00:00.257) 0:00:15.956 ********
2026-03-05 01:00:23.776497 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:23.776506 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:23.776514 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:23.776523 | orchestrator |
2026-03-05 01:00:23.776531 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-03-05 01:00:23.776540 | orchestrator | Thursday 05 March 2026 00:58:17 +0000 (0:00:00.306) 0:00:16.263 ********
2026-03-05 01:00:23.776549 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:23.776557 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:23.776566 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:23.776574 | orchestrator |
2026-03-05 01:00:23.776583 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-03-05 01:00:23.776592 | orchestrator | Thursday 05 March 2026 00:58:18 +0000 (0:00:00.381) 0:00:16.644 ********
2026-03-05 01:00:23.776601 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:23.776609 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:23.776618 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:23.776626 | orchestrator |
2026-03-05 01:00:23.776635 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-03-05 01:00:23.776652 | orchestrator | Thursday 05 March 2026 00:58:18 +0000 (0:00:00.605) 0:00:17.249 ********
2026-03-05 01:00:23.776662 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:23.776677 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:23.776701 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:23.776716 | orchestrator |
2026-03-05 01:00:23.776732 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-03-05 01:00:23.776747 | orchestrator | Thursday 05 March 2026 00:58:19 +0000 (0:00:00.365) 0:00:17.614 ********
2026-03-05 01:00:23.776762 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:23.776776 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:23.776785 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:23.776793 | orchestrator |
2026-03-05 01:00:23.776802 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-03-05 01:00:23.776811 | orchestrator | Thursday 05 March 2026 00:58:19 +0000 (0:00:00.341) 0:00:17.956 ********
2026-03-05 01:00:23.776819 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:23.776828 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:23.776837 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:23.776887 | orchestrator |
2026-03-05 01:00:23.776897 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-03-05 01:00:23.776906 | orchestrator | Thursday 05 March 2026 00:58:19 +0000 (0:00:00.357) 0:00:18.314 ********
2026-03-05 01:00:23.776915 | orchestrator | skipping:
[testbed-node-3]
2026-03-05 01:00:23.776923 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:23.776932 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:23.776941 | orchestrator |
2026-03-05 01:00:23.776949 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-03-05 01:00:23.776958 | orchestrator | Thursday 05 March 2026 00:58:20 +0000 (0:00:00.602) 0:00:18.916 ********
2026-03-05 01:00:23.776968 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f88409fd--5147--5194--8288--2488b5e44352-osd--block--f88409fd--5147--5194--8288--2488b5e44352', 'dm-uuid-LVM-TF2aYQ1gcI3opAwWGnpIMDJl6d8DlJBZKYpryDGlNdcI2vO1IQcI176nGOGfrZZB'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-05 01:00:23.776979 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9d6733ad--9ad8--5bce--b749--e645aedee181-osd--block--9d6733ad--9ad8--5bce--b749--e645aedee181', 'dm-uuid-LVM-UEVEB0cZjoklxsfZk5hz7YwDzzENqXERbVNoNDV9w1eHnvNFLMJYbXXgxazLyb4w'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-05 01:00:23.776989 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-05 01:00:23.776998 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-05 01:00:23.777020 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-05 01:00:23.777030 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-05 01:00:23.777039 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-05 01:00:23.777074 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-05 01:00:23.777085 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-05 01:00:23.777094 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--130794de--baff--5f0b--9c30--9a8206b73831-osd--block--130794de--baff--5f0b--9c30--9a8206b73831', 'dm-uuid-LVM-6SxwFyILndwXvKVHabqVnqJVbiSceNTQI62kIoEWE4ddPGfqexPf4TEVW3OPAMve'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-05 01:00:23.777103 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-05 01:00:23.777112 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--54671a7c--dad9--563e--9508--4448c9acfc6a-osd--block--54671a7c--dad9--563e--9508--4448c9acfc6a', 'dm-uuid-LVM-GzgvoDDFvX2TwyrNJxloOIgXhzHvcOX3dh3GYtgbr1lY7Iy9wJxSNzOE1zAHceVu'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-05 01:00:23.777170 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_834c07da-6670-4f26-8062-9b7380900cd1', 'scsi-SQEMU_QEMU_HARDDISK_834c07da-6670-4f26-8062-9b7380900cd1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_834c07da-6670-4f26-8062-9b7380900cd1-part1', 'scsi-SQEMU_QEMU_HARDDISK_834c07da-6670-4f26-8062-9b7380900cd1-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_834c07da-6670-4f26-8062-9b7380900cd1-part14', 'scsi-SQEMU_QEMU_HARDDISK_834c07da-6670-4f26-8062-9b7380900cd1-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_834c07da-6670-4f26-8062-9b7380900cd1-part15', 'scsi-SQEMU_QEMU_HARDDISK_834c07da-6670-4f26-8062-9b7380900cd1-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_834c07da-6670-4f26-8062-9b7380900cd1-part16', 'scsi-SQEMU_QEMU_HARDDISK_834c07da-6670-4f26-8062-9b7380900cd1-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-05 01:00:23.777191 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-05 01:00:23.777200 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--f88409fd--5147--5194--8288--2488b5e44352-osd--block--f88409fd--5147--5194--8288--2488b5e44352'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-jdWfEB-N83k-MDWn-BOLC-ihm4-IydT-Dpp4Ol', 'scsi-0QEMU_QEMU_HARDDISK_46af06ac-e806-45b3-baa6-786374d24d95', 'scsi-SQEMU_QEMU_HARDDISK_46af06ac-e806-45b3-baa6-786374d24d95'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-05 01:00:23.777210 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-05 01:00:23.777252 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--9d6733ad--9ad8--5bce--b749--e645aedee181-osd--block--9d6733ad--9ad8--5bce--b749--e645aedee181'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-V7ZD6i-hIWS-JtXW-HcWn-0dcX-ecnk-fIwTEz', 'scsi-0QEMU_QEMU_HARDDISK_94265553-26b7-47c9-a922-5463d2be5f34', 'scsi-SQEMU_QEMU_HARDDISK_94265553-26b7-47c9-a922-5463d2be5f34'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-05 01:00:23.777274 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-05 01:00:23.777289 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_db99048b-c1ef-4f9e-82d3-cd84d3f63e80', 'scsi-SQEMU_QEMU_HARDDISK_db99048b-c1ef-4f9e-82d3-cd84d3f63e80'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-05 01:00:23.777306 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 01:00:23.777360 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-05-00-03-16-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-05 01:00:23.777372 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-03-05 01:00:23.777381 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 01:00:23.777390 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 01:00:23.777399 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 01:00:23.777414 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:23.777467 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26e3da3f-ebef-4f2e-987f-6c33458d570f', 'scsi-SQEMU_QEMU_HARDDISK_26e3da3f-ebef-4f2e-987f-6c33458d570f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26e3da3f-ebef-4f2e-987f-6c33458d570f-part1', 'scsi-SQEMU_QEMU_HARDDISK_26e3da3f-ebef-4f2e-987f-6c33458d570f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26e3da3f-ebef-4f2e-987f-6c33458d570f-part14', 'scsi-SQEMU_QEMU_HARDDISK_26e3da3f-ebef-4f2e-987f-6c33458d570f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26e3da3f-ebef-4f2e-987f-6c33458d570f-part15', 'scsi-SQEMU_QEMU_HARDDISK_26e3da3f-ebef-4f2e-987f-6c33458d570f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26e3da3f-ebef-4f2e-987f-6c33458d570f-part16', 'scsi-SQEMU_QEMU_HARDDISK_26e3da3f-ebef-4f2e-987f-6c33458d570f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-05 01:00:23.777492 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--130794de--baff--5f0b--9c30--9a8206b73831-osd--block--130794de--baff--5f0b--9c30--9a8206b73831'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-zxJquG-kgIY-dbro-xDa2-2Hhj-fSLP-y9EZ7f', 'scsi-0QEMU_QEMU_HARDDISK_fde3cda2-3067-4d86-95c6-d39f62804520', 'scsi-SQEMU_QEMU_HARDDISK_fde3cda2-3067-4d86-95c6-d39f62804520'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-05 01:00:23.777509 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--54671a7c--dad9--563e--9508--4448c9acfc6a-osd--block--54671a7c--dad9--563e--9508--4448c9acfc6a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Ijjo4n-FAhc-UcPL-RECK-8Umb-4nOw-0gbpuM', 'scsi-0QEMU_QEMU_HARDDISK_5fc6e5d1-feaa-44be-badf-9551630a8ded', 'scsi-SQEMU_QEMU_HARDDISK_5fc6e5d1-feaa-44be-badf-9551630a8ded'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-05 01:00:23.777534 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4cc63ee5-51dc-4d14-b9fb-faf031b30aaa', 'scsi-SQEMU_QEMU_HARDDISK_4cc63ee5-51dc-4d14-b9fb-faf031b30aaa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-05 01:00:23.777550 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-05-00-03-08-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-05 01:00:23.777559 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:00:23.777568 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7f4ff93a--c4fd--5f9b--af1c--107d8e49bf15-osd--block--7f4ff93a--c4fd--5f9b--af1c--107d8e49bf15', 'dm-uuid-LVM-Pf3XoZa14DA1N8trbcyuXz1HFWwultSjyo0RNgMBhzdapfZ8f9kjAwVQTfyGGbwo'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-05 01:00:23.777584 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--56dff28b--2239--50bc--bb4f--66f9aa80ba88-osd--block--56dff28b--2239--50bc--bb4f--66f9aa80ba88', 'dm-uuid-LVM-7021JZpIOlxZSNvSousoCRWY6EUPV9VtGoV6JFDR2ugTDoJu1wseGz0A83f6v6Gj'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-05 01:00:23 | INFO  | Task 14559a05-a062-4407-9e25-b974b67c1a9d is in state SUCCESS  2026-03-05 01:00:23.777603 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 01:00:23.777612 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 01:00:23.777621 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 
1}})  2026-03-05 01:00:23.777636 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 01:00:23.777645 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 01:00:23.777657 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 01:00:23.777672 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 01:00:23.777694 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 01:00:23.777728 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d85b406f-f47a-4803-8455-48f8dde86a68', 'scsi-SQEMU_QEMU_HARDDISK_d85b406f-f47a-4803-8455-48f8dde86a68'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d85b406f-f47a-4803-8455-48f8dde86a68-part1', 'scsi-SQEMU_QEMU_HARDDISK_d85b406f-f47a-4803-8455-48f8dde86a68-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d85b406f-f47a-4803-8455-48f8dde86a68-part14', 'scsi-SQEMU_QEMU_HARDDISK_d85b406f-f47a-4803-8455-48f8dde86a68-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d85b406f-f47a-4803-8455-48f8dde86a68-part15', 'scsi-SQEMU_QEMU_HARDDISK_d85b406f-f47a-4803-8455-48f8dde86a68-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d85b406f-f47a-4803-8455-48f8dde86a68-part16', 'scsi-SQEMU_QEMU_HARDDISK_d85b406f-f47a-4803-8455-48f8dde86a68-part16'], 'labels': ['BOOT'], 'masters': 
[], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-05 01:00:23.777757 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--7f4ff93a--c4fd--5f9b--af1c--107d8e49bf15-osd--block--7f4ff93a--c4fd--5f9b--af1c--107d8e49bf15'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-FuhEQw-hBkB-kamn-cyjG-liQC-9xZP-ztM27Q', 'scsi-0QEMU_QEMU_HARDDISK_51d29519-c1f9-43c2-8da2-810d6ee2cf1d', 'scsi-SQEMU_QEMU_HARDDISK_51d29519-c1f9-43c2-8da2-810d6ee2cf1d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-05 01:00:23.777777 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--56dff28b--2239--50bc--bb4f--66f9aa80ba88-osd--block--56dff28b--2239--50bc--bb4f--66f9aa80ba88'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-xeeaVk-tk58-c70M-ecxI-uAuR-vNFi-S3719x', 'scsi-0QEMU_QEMU_HARDDISK_01b35dfc-cc13-430f-9521-065aaefb7085', 'scsi-SQEMU_QEMU_HARDDISK_01b35dfc-cc13-430f-9521-065aaefb7085'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-05 01:00:23.777788 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f3d47084-7273-4e4c-b048-5cf25f7ffc67', 'scsi-SQEMU_QEMU_HARDDISK_f3d47084-7273-4e4c-b048-5cf25f7ffc67'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-05 01:00:23.777803 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-05-00-03-14-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-05 01:00:23.777812 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:00:23.777821 | orchestrator | 2026-03-05 01:00:23.777830 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] *** 2026-03-05 01:00:23.777839 | orchestrator | Thursday 05 March 2026 00:58:21 +0000 (0:00:00.678) 0:00:19.595 ******** 2026-03-05 01:00:23.777849 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f88409fd--5147--5194--8288--2488b5e44352-osd--block--f88409fd--5147--5194--8288--2488b5e44352', 'dm-uuid-LVM-TF2aYQ1gcI3opAwWGnpIMDJl6d8DlJBZKYpryDGlNdcI2vO1IQcI176nGOGfrZZB'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:23.777864 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9d6733ad--9ad8--5bce--b749--e645aedee181-osd--block--9d6733ad--9ad8--5bce--b749--e645aedee181', 'dm-uuid-LVM-UEVEB0cZjoklxsfZk5hz7YwDzzENqXERbVNoNDV9w1eHnvNFLMJYbXXgxazLyb4w'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:23.777877 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:23.777887 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:23.777896 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:23.777910 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': 
{'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:23.777919 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:23.777933 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:23.777943 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:23.777955 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:23.777964 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--130794de--baff--5f0b--9c30--9a8206b73831-osd--block--130794de--baff--5f0b--9c30--9a8206b73831', 'dm-uuid-LVM-6SxwFyILndwXvKVHabqVnqJVbiSceNTQI62kIoEWE4ddPGfqexPf4TEVW3OPAMve'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:23.777979 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--54671a7c--dad9--563e--9508--4448c9acfc6a-osd--block--54671a7c--dad9--563e--9508--4448c9acfc6a', 'dm-uuid-LVM-GzgvoDDFvX2TwyrNJxloOIgXhzHvcOX3dh3GYtgbr1lY7Iy9wJxSNzOE1zAHceVu'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-05 01:00:23.777994 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_834c07da-6670-4f26-8062-9b7380900cd1', 'scsi-SQEMU_QEMU_HARDDISK_834c07da-6670-4f26-8062-9b7380900cd1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_834c07da-6670-4f26-8062-9b7380900cd1-part1', 'scsi-SQEMU_QEMU_HARDDISK_834c07da-6670-4f26-8062-9b7380900cd1-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_834c07da-6670-4f26-8062-9b7380900cd1-part14', 'scsi-SQEMU_QEMU_HARDDISK_834c07da-6670-4f26-8062-9b7380900cd1-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_834c07da-6670-4f26-8062-9b7380900cd1-part15', 'scsi-SQEMU_QEMU_HARDDISK_834c07da-6670-4f26-8062-9b7380900cd1-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_834c07da-6670-4f26-8062-9b7380900cd1-part16', 'scsi-SQEMU_QEMU_HARDDISK_834c07da-6670-4f26-8062-9b7380900cd1-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-05 01:00:23.778054 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--f88409fd--5147--5194--8288--2488b5e44352-osd--block--f88409fd--5147--5194--8288--2488b5e44352'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-jdWfEB-N83k-MDWn-BOLC-ihm4-IydT-Dpp4Ol', 'scsi-0QEMU_QEMU_HARDDISK_46af06ac-e806-45b3-baa6-786374d24d95', 'scsi-SQEMU_QEMU_HARDDISK_46af06ac-e806-45b3-baa6-786374d24d95'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-05 01:00:23.778074 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-05 01:00:23.778084 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--9d6733ad--9ad8--5bce--b749--e645aedee181-osd--block--9d6733ad--9ad8--5bce--b749--e645aedee181'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-V7ZD6i-hIWS-JtXW-HcWn-0dcX-ecnk-fIwTEz', 'scsi-0QEMU_QEMU_HARDDISK_94265553-26b7-47c9-a922-5463d2be5f34', 'scsi-SQEMU_QEMU_HARDDISK_94265553-26b7-47c9-a922-5463d2be5f34'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-05 01:00:23.778101 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-05 01:00:23.778114 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_db99048b-c1ef-4f9e-82d3-cd84d3f63e80', 'scsi-SQEMU_QEMU_HARDDISK_db99048b-c1ef-4f9e-82d3-cd84d3f63e80'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-05 01:00:23.778123 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-05 01:00:23.778153 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-05-00-03-16-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-05 01:00:23.778168 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-05 01:00:23.778183 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-05 01:00:23.778192 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-05 01:00:23.778201 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-05 01:00:23.778214 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-05 01:00:23.778231 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26e3da3f-ebef-4f2e-987f-6c33458d570f', 'scsi-SQEMU_QEMU_HARDDISK_26e3da3f-ebef-4f2e-987f-6c33458d570f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26e3da3f-ebef-4f2e-987f-6c33458d570f-part1', 'scsi-SQEMU_QEMU_HARDDISK_26e3da3f-ebef-4f2e-987f-6c33458d570f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26e3da3f-ebef-4f2e-987f-6c33458d570f-part14', 'scsi-SQEMU_QEMU_HARDDISK_26e3da3f-ebef-4f2e-987f-6c33458d570f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26e3da3f-ebef-4f2e-987f-6c33458d570f-part15', 'scsi-SQEMU_QEMU_HARDDISK_26e3da3f-ebef-4f2e-987f-6c33458d570f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26e3da3f-ebef-4f2e-987f-6c33458d570f-part16', 'scsi-SQEMU_QEMU_HARDDISK_26e3da3f-ebef-4f2e-987f-6c33458d570f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-05 01:00:23.778247 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--130794de--baff--5f0b--9c30--9a8206b73831-osd--block--130794de--baff--5f0b--9c30--9a8206b73831'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-zxJquG-kgIY-dbro-xDa2-2Hhj-fSLP-y9EZ7f', 'scsi-0QEMU_QEMU_HARDDISK_fde3cda2-3067-4d86-95c6-d39f62804520', 'scsi-SQEMU_QEMU_HARDDISK_fde3cda2-3067-4d86-95c6-d39f62804520'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-05 01:00:23.778261 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--54671a7c--dad9--563e--9508--4448c9acfc6a-osd--block--54671a7c--dad9--563e--9508--4448c9acfc6a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Ijjo4n-FAhc-UcPL-RECK-8Umb-4nOw-0gbpuM', 'scsi-0QEMU_QEMU_HARDDISK_5fc6e5d1-feaa-44be-badf-9551630a8ded', 'scsi-SQEMU_QEMU_HARDDISK_5fc6e5d1-feaa-44be-badf-9551630a8ded'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-05 01:00:23.778270 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:23.778279 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4cc63ee5-51dc-4d14-b9fb-faf031b30aaa', 'scsi-SQEMU_QEMU_HARDDISK_4cc63ee5-51dc-4d14-b9fb-faf031b30aaa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-05 01:00:23.778293 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-05-00-03-08-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-05 01:00:23.778309 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:23.778318 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7f4ff93a--c4fd--5f9b--af1c--107d8e49bf15-osd--block--7f4ff93a--c4fd--5f9b--af1c--107d8e49bf15', 'dm-uuid-LVM-Pf3XoZa14DA1N8trbcyuXz1HFWwultSjyo0RNgMBhzdapfZ8f9kjAwVQTfyGGbwo'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-05 01:00:23.778327 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--56dff28b--2239--50bc--bb4f--66f9aa80ba88-osd--block--56dff28b--2239--50bc--bb4f--66f9aa80ba88', 'dm-uuid-LVM-7021JZpIOlxZSNvSousoCRWY6EUPV9VtGoV6JFDR2ugTDoJu1wseGz0A83f6v6Gj'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-05 01:00:23.778340 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-05 01:00:23.778349 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-05 01:00:23.778359 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-05 01:00:23.778372 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-05 01:00:23.778386 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-05 01:00:23.778396 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-05 01:00:23.778405 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-05 01:00:23.778417 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-05 01:00:23.778433 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d85b406f-f47a-4803-8455-48f8dde86a68', 'scsi-SQEMU_QEMU_HARDDISK_d85b406f-f47a-4803-8455-48f8dde86a68'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d85b406f-f47a-4803-8455-48f8dde86a68-part1', 'scsi-SQEMU_QEMU_HARDDISK_d85b406f-f47a-4803-8455-48f8dde86a68-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d85b406f-f47a-4803-8455-48f8dde86a68-part14', 'scsi-SQEMU_QEMU_HARDDISK_d85b406f-f47a-4803-8455-48f8dde86a68-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d85b406f-f47a-4803-8455-48f8dde86a68-part15', 'scsi-SQEMU_QEMU_HARDDISK_d85b406f-f47a-4803-8455-48f8dde86a68-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d85b406f-f47a-4803-8455-48f8dde86a68-part16', 'scsi-SQEMU_QEMU_HARDDISK_d85b406f-f47a-4803-8455-48f8dde86a68-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-05 01:00:23.778448 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--7f4ff93a--c4fd--5f9b--af1c--107d8e49bf15-osd--block--7f4ff93a--c4fd--5f9b--af1c--107d8e49bf15'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-FuhEQw-hBkB-kamn-cyjG-liQC-9xZP-ztM27Q', 'scsi-0QEMU_QEMU_HARDDISK_51d29519-c1f9-43c2-8da2-810d6ee2cf1d', 'scsi-SQEMU_QEMU_HARDDISK_51d29519-c1f9-43c2-8da2-810d6ee2cf1d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-05 01:00:23.778461 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--56dff28b--2239--50bc--bb4f--66f9aa80ba88-osd--block--56dff28b--2239--50bc--bb4f--66f9aa80ba88'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-xeeaVk-tk58-c70M-ecxI-uAuR-vNFi-S3719x', 'scsi-0QEMU_QEMU_HARDDISK_01b35dfc-cc13-430f-9521-065aaefb7085', 'scsi-SQEMU_QEMU_HARDDISK_01b35dfc-cc13-430f-9521-065aaefb7085'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-05 01:00:23.778471 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f3d47084-7273-4e4c-b048-5cf25f7ffc67', 'scsi-SQEMU_QEMU_HARDDISK_f3d47084-7273-4e4c-b048-5cf25f7ffc67'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-05 01:00:23.778490 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-05-00-03-14-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-05 01:00:23.778499 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:23.778508 | orchestrator |
2026-03-05 01:00:23.778517 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-03-05 01:00:23.778526 | orchestrator | Thursday 05 March 2026 00:58:21 +0000 (0:00:00.785) 0:00:20.381 ********
2026-03-05 01:00:23.778535 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:00:23.778543 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:00:23.778552 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:00:23.778561 | orchestrator |
2026-03-05 01:00:23.778569 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-05 01:00:23.778578 | orchestrator | Thursday 05 March 2026 00:58:22 +0000 (0:00:00.719) 0:00:21.100 ********
2026-03-05 01:00:23.778587 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:00:23.778595 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:00:23.778604 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:00:23.778612 | orchestrator |
2026-03-05 01:00:23.778621 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-05 01:00:23.778630 | orchestrator | Thursday 05 March 2026 00:58:23 +0000 (0:00:00.582) 0:00:21.682 ********
2026-03-05 01:00:23.778638 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:00:23.778647 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:00:23.778655 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:00:23.778667 | orchestrator |
2026-03-05 01:00:23.778682 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-05 01:00:23.778697 | orchestrator | Thursday 05 March 2026 00:58:23 +0000 (0:00:00.665) 0:00:22.348 ********
2026-03-05 01:00:23.778711 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:23.778727 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:23.778742 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:23.778757 | orchestrator |
2026-03-05 01:00:23.778772 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-05 01:00:23.778782 | orchestrator | Thursday 05 March 2026 00:58:24 +0000 (0:00:00.359) 0:00:22.708 ********
2026-03-05 01:00:23.778790 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:23.778799 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:23.778808 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:23.778816 | orchestrator |
2026-03-05 01:00:23.778825 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-05 01:00:23.778834 | orchestrator | Thursday 05 March 2026 00:58:24 +0000 (0:00:00.468) 0:00:23.177 ********
2026-03-05 01:00:23.778842 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:23.778851 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:23.778860 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:23.778868 | orchestrator |
2026-03-05 01:00:23.778877 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-05 01:00:23.778886 | orchestrator | Thursday 05 March 2026 00:58:25 +0000 (0:00:00.704) 0:00:23.882 ********
2026-03-05 01:00:23.778895 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-03-05 01:00:23.778904 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-03-05 01:00:23.778913 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-03-05 01:00:23.778922 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-03-05 01:00:23.778944 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-03-05 01:00:23.778953 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-03-05 01:00:23.778962 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-03-05 01:00:23.778971 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-03-05 01:00:23.778979 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-03-05 01:00:23.778988 | orchestrator |
2026-03-05 01:00:23.778997 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-03-05 01:00:23.779006 | orchestrator | Thursday 05 March 2026 00:58:26 +0000 (0:00:00.941) 0:00:24.823 ********
2026-03-05 01:00:23.779014 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-05 01:00:23.779023 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-05 01:00:23.779032 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-05 01:00:23.779043 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:23.779064 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-05 01:00:23.779082 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-03-05 01:00:23.779097 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-03-05 01:00:23.779110 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:23.779126 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-03-05 01:00:23.779171 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-03-05 01:00:23.779187 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-03-05 01:00:23.779202 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:23.779218 | orchestrator |
2026-03-05 01:00:23.779233 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-03-05 01:00:23.779247 | orchestrator | Thursday 05 March 2026 00:58:26 +0000 (0:00:00.411) 0:00:25.234 ********
2026-03-05 01:00:23.779262 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-05 01:00:23.779277 | orchestrator |
2026-03-05 01:00:23.779301 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-05 01:00:23.779319 | orchestrator | Thursday 05 March 2026 00:58:27 +0000 (0:00:00.775) 0:00:26.010 ********
2026-03-05 01:00:23.779335 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:23.779350 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:23.779365 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:23.779381 | orchestrator |
2026-03-05 01:00:23.779396 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-05 01:00:23.779411 | orchestrator | Thursday 05 March 2026 00:58:27 +0000 (0:00:00.350) 0:00:26.360 ********
2026-03-05 01:00:23.779422 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:23.779430 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:23.779439 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:23.779448 | orchestrator |
2026-03-05 01:00:23.779461 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-05 01:00:23.779476 | orchestrator | Thursday 05 March 2026 00:58:28 +0000 (0:00:00.366) 0:00:26.727 ********
2026-03-05 01:00:23.779491 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:23.779505 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:23.779520 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:23.779535 | orchestrator |
2026-03-05 01:00:23.779549 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-05 01:00:23.779565 | orchestrator | Thursday 05 March 2026 00:58:28 +0000 (0:00:00.362) 0:00:27.089 ********
2026-03-05
01:00:23.779581 | orchestrator | ok: [testbed-node-3] 2026-03-05 01:00:23.779596 | orchestrator | ok: [testbed-node-4] 2026-03-05 01:00:23.779605 | orchestrator | ok: [testbed-node-5] 2026-03-05 01:00:23.779613 | orchestrator | 2026-03-05 01:00:23.779622 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-05 01:00:23.779640 | orchestrator | Thursday 05 March 2026 00:58:29 +0000 (0:00:00.773) 0:00:27.863 ******** 2026-03-05 01:00:23.779648 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-05 01:00:23.779657 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-05 01:00:23.779666 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-05 01:00:23.779674 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:23.779683 | orchestrator | 2026-03-05 01:00:23.779692 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-05 01:00:23.779700 | orchestrator | Thursday 05 March 2026 00:58:29 +0000 (0:00:00.530) 0:00:28.393 ******** 2026-03-05 01:00:23.779709 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-05 01:00:23.779718 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-05 01:00:23.779726 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-05 01:00:23.779735 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:23.779743 | orchestrator | 2026-03-05 01:00:23.779752 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-05 01:00:23.779761 | orchestrator | Thursday 05 March 2026 00:58:30 +0000 (0:00:00.442) 0:00:28.835 ******** 2026-03-05 01:00:23.779770 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-05 01:00:23.779778 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-05 01:00:23.779787 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-05 01:00:23.779795 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:23.779804 | orchestrator | 2026-03-05 01:00:23.779813 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-05 01:00:23.779822 | orchestrator | Thursday 05 March 2026 00:58:30 +0000 (0:00:00.411) 0:00:29.247 ******** 2026-03-05 01:00:23.779830 | orchestrator | ok: [testbed-node-3] 2026-03-05 01:00:23.779839 | orchestrator | ok: [testbed-node-4] 2026-03-05 01:00:23.779848 | orchestrator | ok: [testbed-node-5] 2026-03-05 01:00:23.779856 | orchestrator | 2026-03-05 01:00:23.779865 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-05 01:00:23.779880 | orchestrator | Thursday 05 March 2026 00:58:31 +0000 (0:00:00.410) 0:00:29.658 ******** 2026-03-05 01:00:23.779889 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-05 01:00:23.779897 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-05 01:00:23.779906 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-05 01:00:23.779915 | orchestrator | 2026-03-05 01:00:23.779924 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-05 01:00:23.779932 | orchestrator | Thursday 05 March 2026 00:58:31 +0000 (0:00:00.646) 0:00:30.305 ******** 2026-03-05 01:00:23.779941 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-05 01:00:23.779950 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-05 01:00:23.779959 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-05 01:00:23.779967 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-05 01:00:23.779976 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-03-05 01:00:23.779985 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-05 01:00:23.779994 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-05 01:00:23.780002 | orchestrator | 2026-03-05 01:00:23.780011 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-05 01:00:23.780020 | orchestrator | Thursday 05 March 2026 00:58:32 +0000 (0:00:01.086) 0:00:31.392 ******** 2026-03-05 01:00:23.780028 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-05 01:00:23.780037 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-05 01:00:23.780051 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-05 01:00:23.780060 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-05 01:00:23.780076 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-05 01:00:23.780085 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-05 01:00:23.780094 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-05 01:00:23.780102 | orchestrator | 2026-03-05 01:00:23.780111 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-03-05 01:00:23.780120 | orchestrator | Thursday 05 March 2026 00:58:35 +0000 (0:00:02.355) 0:00:33.747 ******** 2026-03-05 01:00:23.780175 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:23.780186 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:00:23.780195 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-03-05 01:00:23.780204 | orchestrator | 2026-03-05 01:00:23.780212 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-03-05 01:00:23.780221 | orchestrator | Thursday 05 March 2026 00:58:35 +0000 (0:00:00.497) 0:00:34.244 ******** 2026-03-05 01:00:23.780233 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-05 01:00:23.780249 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-05 01:00:23.780271 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-05 01:00:23.780289 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-05 01:00:23.780304 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-05 01:00:23.780319 | orchestrator | 2026-03-05 01:00:23.780334 | orchestrator | TASK [generate keys] 
*********************************************************** 2026-03-05 01:00:23.780348 | orchestrator | Thursday 05 March 2026 00:59:22 +0000 (0:00:47.012) 0:01:21.256 ******** 2026-03-05 01:00:23.780361 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-05 01:00:23.780376 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-05 01:00:23.780398 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-05 01:00:23.780413 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-05 01:00:23.780428 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-05 01:00:23.780442 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-05 01:00:23.780457 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-03-05 01:00:23.780471 | orchestrator | 2026-03-05 01:00:23.780486 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-03-05 01:00:23.780512 | orchestrator | Thursday 05 March 2026 00:59:48 +0000 (0:00:26.117) 0:01:47.373 ******** 2026-03-05 01:00:23.780527 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-05 01:00:23.780541 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-05 01:00:23.780554 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-05 01:00:23.780567 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-05 01:00:23.780581 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-05 01:00:23.780594 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-05 01:00:23.780608 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-05 01:00:23.780621 | orchestrator | 2026-03-05 01:00:23.780630 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-03-05 01:00:23.780638 | orchestrator | Thursday 05 March 2026 01:00:01 +0000 (0:00:12.985) 0:02:00.359 ******** 2026-03-05 01:00:23.780646 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-05 01:00:23.780653 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-05 01:00:23.780661 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-05 01:00:23.780678 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-05 01:00:23.780687 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-05 01:00:23.780698 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-05 01:00:23.780711 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-05 01:00:23.780731 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-05 01:00:23.780746 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-05 01:00:23.780758 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-05 01:00:23.780769 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-05 01:00:23.780781 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-05 01:00:23.780793 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-05 01:00:23.780805 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2026-03-05 01:00:23.780816 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-05 01:00:23.780828 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-05 01:00:23.780843 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-05 01:00:23.780856 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-05 01:00:23.780870 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-03-05 01:00:23.780883 | orchestrator | 2026-03-05 01:00:23.780892 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-05 01:00:23.780900 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-03-05 01:00:23.780909 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-03-05 01:00:23.780924 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-03-05 01:00:23.780946 | orchestrator | 2026-03-05 01:00:23.780961 | orchestrator | 2026-03-05 01:00:23.780974 | orchestrator | 2026-03-05 01:00:23.780987 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-05 01:00:23.781011 | orchestrator | Thursday 05 March 2026 01:00:20 +0000 (0:00:18.845) 0:02:19.204 ******** 2026-03-05 01:00:23.781024 | orchestrator | =============================================================================== 2026-03-05 01:00:23.781036 | orchestrator | create openstack pool(s) ----------------------------------------------- 47.01s 2026-03-05 01:00:23.781048 | orchestrator | generate keys ---------------------------------------------------------- 26.12s 2026-03-05 01:00:23.781061 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 18.85s 
2026-03-05 01:00:23.781074 | orchestrator | get keys from monitors ------------------------------------------------- 12.99s 2026-03-05 01:00:23.781087 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.36s 2026-03-05 01:00:23.781100 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.36s 2026-03-05 01:00:23.781121 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 2.18s 2026-03-05 01:00:23.781158 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.09s 2026-03-05 01:00:23.781174 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.94s 2026-03-05 01:00:23.781189 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.90s 2026-03-05 01:00:23.781203 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.89s 2026-03-05 01:00:23.781219 | orchestrator | ceph-facts : Check for a ceph mon socket -------------------------------- 0.84s 2026-03-05 01:00:23.781234 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.79s 2026-03-05 01:00:23.781250 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.79s 2026-03-05 01:00:23.781264 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.78s 2026-03-05 01:00:23.781276 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.77s 2026-03-05 01:00:23.781284 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.72s 2026-03-05 01:00:23.781292 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.71s 2026-03-05 01:00:23.781300 | orchestrator | ceph-facts : Set osd_pool_default_crush_rule fact ----------------------- 0.71s 2026-03-05 
01:00:23.781308 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.68s 2026-03-05 01:00:23.781316 | orchestrator | 2026-03-05 01:00:23 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:00:26.814744 | orchestrator | 2026-03-05 01:00:26 | INFO  | Task f4bf84bd-7ada-4ae7-8290-b64245076069 is in state STARTED 2026-03-05 01:00:26.816268 | orchestrator | 2026-03-05 01:00:26 | INFO  | Task b20a26f7-7de7-4cb5-ad3c-d65a6a176e91 is in state STARTED 2026-03-05 01:00:26.817560 | orchestrator | 2026-03-05 01:00:26 | INFO  | Task 5a3cefa6-daeb-41e8-abe0-8112cfa2fb0e is in state STARTED 2026-03-05 01:00:26.817592 | orchestrator | 2026-03-05 01:00:26 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:00:29.848814 | orchestrator | 2026-03-05 01:00:29 | INFO  | Task f4bf84bd-7ada-4ae7-8290-b64245076069 is in state STARTED 2026-03-05 01:00:29.850358 | orchestrator | 2026-03-05 01:00:29 | INFO  | Task b20a26f7-7de7-4cb5-ad3c-d65a6a176e91 is in state STARTED 2026-03-05 01:00:29.852521 | orchestrator | 2026-03-05 01:00:29 | INFO  | Task 5a3cefa6-daeb-41e8-abe0-8112cfa2fb0e is in state STARTED 2026-03-05 01:00:29.852562 | orchestrator | 2026-03-05 01:00:29 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:00:32.887363 | orchestrator | 2026-03-05 01:00:32 | INFO  | Task f4bf84bd-7ada-4ae7-8290-b64245076069 is in state STARTED 2026-03-05 01:00:32.888326 | orchestrator | 2026-03-05 01:00:32 | INFO  | Task b20a26f7-7de7-4cb5-ad3c-d65a6a176e91 is in state STARTED 2026-03-05 01:00:32.889253 | orchestrator | 2026-03-05 01:00:32 | INFO  | Task 5a3cefa6-daeb-41e8-abe0-8112cfa2fb0e is in state STARTED 2026-03-05 01:00:32.889325 | orchestrator | 2026-03-05 01:00:32 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:00:35.925222 | orchestrator | 2026-03-05 01:00:35 | INFO  | Task f4bf84bd-7ada-4ae7-8290-b64245076069 is in state STARTED 2026-03-05 01:00:35.926275 | orchestrator | 2026-03-05 01:00:35 | INFO  | Task 
b20a26f7-7de7-4cb5-ad3c-d65a6a176e91 is in state STARTED 2026-03-05 01:00:35.927951 | orchestrator | 2026-03-05 01:00:35 | INFO  | Task 5a3cefa6-daeb-41e8-abe0-8112cfa2fb0e is in state STARTED 2026-03-05 01:00:35.927994 | orchestrator | 2026-03-05 01:00:35 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:00:38.965788 | orchestrator | 2026-03-05 01:00:38 | INFO  | Task f4bf84bd-7ada-4ae7-8290-b64245076069 is in state STARTED 2026-03-05 01:00:38.966440 | orchestrator | 2026-03-05 01:00:38 | INFO  | Task b20a26f7-7de7-4cb5-ad3c-d65a6a176e91 is in state STARTED 2026-03-05 01:00:38.966806 | orchestrator | 2026-03-05 01:00:38 | INFO  | Task 5a3cefa6-daeb-41e8-abe0-8112cfa2fb0e is in state STARTED 2026-03-05 01:00:38.966824 | orchestrator | 2026-03-05 01:00:38 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:00:42.017768 | orchestrator | 2026-03-05 01:00:42 | INFO  | Task f4bf84bd-7ada-4ae7-8290-b64245076069 is in state STARTED 2026-03-05 01:00:42.017864 | orchestrator | 2026-03-05 01:00:42 | INFO  | Task b20a26f7-7de7-4cb5-ad3c-d65a6a176e91 is in state STARTED 2026-03-05 01:00:42.019527 | orchestrator | 2026-03-05 01:00:42 | INFO  | Task 5a3cefa6-daeb-41e8-abe0-8112cfa2fb0e is in state STARTED 2026-03-05 01:00:42.019567 | orchestrator | 2026-03-05 01:00:42 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:00:45.065252 | orchestrator | 2026-03-05 01:00:45 | INFO  | Task f4bf84bd-7ada-4ae7-8290-b64245076069 is in state STARTED 2026-03-05 01:00:45.066403 | orchestrator | 2026-03-05 01:00:45 | INFO  | Task b20a26f7-7de7-4cb5-ad3c-d65a6a176e91 is in state STARTED 2026-03-05 01:00:45.067862 | orchestrator | 2026-03-05 01:00:45 | INFO  | Task 5a3cefa6-daeb-41e8-abe0-8112cfa2fb0e is in state STARTED 2026-03-05 01:00:45.067912 | orchestrator | 2026-03-05 01:00:45 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:00:48.111010 | orchestrator | 2026-03-05 01:00:48 | INFO  | Task f4bf84bd-7ada-4ae7-8290-b64245076069 is in state 
STARTED 2026-03-05 01:00:48.112121 | orchestrator | 2026-03-05 01:00:48 | INFO  | Task b20a26f7-7de7-4cb5-ad3c-d65a6a176e91 is in state STARTED 2026-03-05 01:00:48.114108 | orchestrator | 2026-03-05 01:00:48 | INFO  | Task 5a3cefa6-daeb-41e8-abe0-8112cfa2fb0e is in state STARTED 2026-03-05 01:00:48.114183 | orchestrator | 2026-03-05 01:00:48 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:00:51.160055 | orchestrator | 2026-03-05 01:00:51 | INFO  | Task f4bf84bd-7ada-4ae7-8290-b64245076069 is in state STARTED 2026-03-05 01:00:51.165221 | orchestrator | 2026-03-05 01:00:51 | INFO  | Task b20a26f7-7de7-4cb5-ad3c-d65a6a176e91 is in state STARTED 2026-03-05 01:00:51.166171 | orchestrator | 2026-03-05 01:00:51 | INFO  | Task 5a3cefa6-daeb-41e8-abe0-8112cfa2fb0e is in state STARTED 2026-03-05 01:00:51.166223 | orchestrator | 2026-03-05 01:00:51 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:00:54.210277 | orchestrator | 2026-03-05 01:00:54 | INFO  | Task f4bf84bd-7ada-4ae7-8290-b64245076069 is in state STARTED 2026-03-05 01:00:54.210737 | orchestrator | 2026-03-05 01:00:54 | INFO  | Task b20a26f7-7de7-4cb5-ad3c-d65a6a176e91 is in state STARTED 2026-03-05 01:00:54.212331 | orchestrator | 2026-03-05 01:00:54 | INFO  | Task 5a3cefa6-daeb-41e8-abe0-8112cfa2fb0e is in state STARTED 2026-03-05 01:00:54.212407 | orchestrator | 2026-03-05 01:00:54 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:00:57.257577 | orchestrator | 2026-03-05 01:00:57 | INFO  | Task f4bf84bd-7ada-4ae7-8290-b64245076069 is in state STARTED 2026-03-05 01:00:57.259316 | orchestrator | 2026-03-05 01:00:57 | INFO  | Task b20a26f7-7de7-4cb5-ad3c-d65a6a176e91 is in state STARTED 2026-03-05 01:00:57.260542 | orchestrator | 2026-03-05 01:00:57 | INFO  | Task 5a3cefa6-daeb-41e8-abe0-8112cfa2fb0e is in state STARTED 2026-03-05 01:00:57.260584 | orchestrator | 2026-03-05 01:00:57 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:01:00.297723 | orchestrator | 
2026-03-05 01:01:00 | INFO  | Task f4bf84bd-7ada-4ae7-8290-b64245076069 is in state STARTED 2026-03-05 01:01:00.299925 | orchestrator | 2026-03-05 01:01:00 | INFO  | Task b20a26f7-7de7-4cb5-ad3c-d65a6a176e91 is in state STARTED 2026-03-05 01:01:00.301956 | orchestrator | 2026-03-05 01:01:00 | INFO  | Task 5a3cefa6-daeb-41e8-abe0-8112cfa2fb0e is in state STARTED 2026-03-05 01:01:00.302048 | orchestrator | 2026-03-05 01:01:00 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:01:03.351409 | orchestrator | 2026-03-05 01:01:03 | INFO  | Task f4bf84bd-7ada-4ae7-8290-b64245076069 is in state STARTED 2026-03-05 01:01:03.352916 | orchestrator | 2026-03-05 01:01:03 | INFO  | Task b20a26f7-7de7-4cb5-ad3c-d65a6a176e91 is in state STARTED 2026-03-05 01:01:03.355338 | orchestrator | 2026-03-05 01:01:03 | INFO  | Task 5a3cefa6-daeb-41e8-abe0-8112cfa2fb0e is in state STARTED 2026-03-05 01:01:03.355538 | orchestrator | 2026-03-05 01:01:03 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:01:06.407496 | orchestrator | 2026-03-05 01:01:06 | INFO  | Task f4bf84bd-7ada-4ae7-8290-b64245076069 is in state SUCCESS 2026-03-05 01:01:06.408085 | orchestrator | 2026-03-05 01:01:06 | INFO  | Task b20a26f7-7de7-4cb5-ad3c-d65a6a176e91 is in state STARTED 2026-03-05 01:01:06.410359 | orchestrator | 2026-03-05 01:01:06 | INFO  | Task 5a3cefa6-daeb-41e8-abe0-8112cfa2fb0e is in state STARTED 2026-03-05 01:01:06.410563 | orchestrator | 2026-03-05 01:01:06 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:01:09.464443 | orchestrator | 2026-03-05 01:01:09 | INFO  | Task bbe8c570-327a-4bcd-96cb-ee421af3a2a7 is in state STARTED 2026-03-05 01:01:09.465900 | orchestrator | 2026-03-05 01:01:09 | INFO  | Task b20a26f7-7de7-4cb5-ad3c-d65a6a176e91 is in state STARTED 2026-03-05 01:01:09.467485 | orchestrator | 2026-03-05 01:01:09 | INFO  | Task 5a3cefa6-daeb-41e8-abe0-8112cfa2fb0e is in state STARTED 2026-03-05 01:01:09.467545 | orchestrator | 2026-03-05 01:01:09 | INFO  | 
Wait 1 second(s) until the next check 2026-03-05 01:01:12.507324 | orchestrator | 2026-03-05 01:01:12 | INFO  | Task bbe8c570-327a-4bcd-96cb-ee421af3a2a7 is in state STARTED 2026-03-05 01:01:12.509502 | orchestrator | 2026-03-05 01:01:12 | INFO  | Task b20a26f7-7de7-4cb5-ad3c-d65a6a176e91 is in state STARTED 2026-03-05 01:01:12.514259 | orchestrator | 2026-03-05 01:01:12 | INFO  | Task 5a3cefa6-daeb-41e8-abe0-8112cfa2fb0e is in state STARTED 2026-03-05 01:01:12.514318 | orchestrator | 2026-03-05 01:01:12 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:01:15.557774 | orchestrator | 2026-03-05 01:01:15 | INFO  | Task bbe8c570-327a-4bcd-96cb-ee421af3a2a7 is in state STARTED 2026-03-05 01:01:15.559813 | orchestrator | 2026-03-05 01:01:15 | INFO  | Task b20a26f7-7de7-4cb5-ad3c-d65a6a176e91 is in state STARTED 2026-03-05 01:01:15.559880 | orchestrator | 2026-03-05 01:01:15 | INFO  | Task 5a3cefa6-daeb-41e8-abe0-8112cfa2fb0e is in state STARTED 2026-03-05 01:01:15.559889 | orchestrator | 2026-03-05 01:01:15 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:01:18.605227 | orchestrator | 2026-03-05 01:01:18 | INFO  | Task bbe8c570-327a-4bcd-96cb-ee421af3a2a7 is in state STARTED 2026-03-05 01:01:18.607537 | orchestrator | 2026-03-05 01:01:18 | INFO  | Task b20a26f7-7de7-4cb5-ad3c-d65a6a176e91 is in state STARTED 2026-03-05 01:01:18.611063 | orchestrator | 2026-03-05 01:01:18 | INFO  | Task 5a3cefa6-daeb-41e8-abe0-8112cfa2fb0e is in state STARTED 2026-03-05 01:01:18.611116 | orchestrator | 2026-03-05 01:01:18 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:01:21.658413 | orchestrator | 2026-03-05 01:01:21 | INFO  | Task bbe8c570-327a-4bcd-96cb-ee421af3a2a7 is in state STARTED 2026-03-05 01:01:21.663066 | orchestrator | 2026-03-05 01:01:21.663127 | orchestrator | 2026-03-05 01:01:21.663134 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2026-03-05 01:01:21.663163 | orchestrator | 
2026-03-05 01:01:21.663168 | orchestrator | TASK [Check if ceph keys exist] ************************************************ 2026-03-05 01:01:21.663174 | orchestrator | Thursday 05 March 2026 01:00:25 +0000 (0:00:00.184) 0:00:00.184 ******** 2026-03-05 01:01:21.663179 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-03-05 01:01:21.663185 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-05 01:01:21.663190 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-05 01:01:21.663195 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-03-05 01:01:21.663200 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-05 01:01:21.663204 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-03-05 01:01:21.663209 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-03-05 01:01:21.663213 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-03-05 01:01:21.663217 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-03-05 01:01:21.663221 | orchestrator | 2026-03-05 01:01:21.663226 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2026-03-05 01:01:21.663231 | orchestrator | Thursday 05 March 2026 01:00:30 +0000 (0:00:04.851) 0:00:05.036 ******** 2026-03-05 01:01:21.663235 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-03-05 01:01:21.663240 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => 
(item=ceph.client.cinder.keyring)
2026-03-05 01:01:21.663244 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-05 01:01:21.663248 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-03-05 01:01:21.663253 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-05 01:01:21.663257 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-03-05 01:01:21.663261 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-03-05 01:01:21.663266 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-03-05 01:01:21.663305 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-03-05 01:01:21.663310 | orchestrator |
2026-03-05 01:01:21.663318 | orchestrator | TASK [Create share directory] **************************************************
2026-03-05 01:01:21.663325 | orchestrator | Thursday 05 March 2026 01:00:35 +0000 (0:00:04.418) 0:00:09.454 ********
2026-03-05 01:01:21.663334 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-05 01:01:21.663366 | orchestrator |
2026-03-05 01:01:21.663375 | orchestrator | TASK [Write ceph keys to the share directory] **********************************
2026-03-05 01:01:21.663383 | orchestrator | Thursday 05 March 2026 01:00:36 +0000 (0:00:01.340) 0:00:10.795 ********
2026-03-05 01:01:21.663391 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2026-03-05 01:01:21.663634 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-03-05 01:01:21.663649 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-03-05 01:01:21.663656 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2026-03-05 01:01:21.663663 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-03-05 01:01:21.663669 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2026-03-05 01:01:21.663676 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2026-03-05 01:01:21.663683 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2026-03-05 01:01:21.663690 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2026-03-05 01:01:21.663696 | orchestrator |
2026-03-05 01:01:21.663703 | orchestrator | TASK [Check if target directories exist] ***************************************
2026-03-05 01:01:21.663710 | orchestrator | Thursday 05 March 2026 01:00:53 +0000 (0:00:17.358) 0:00:28.154 ********
2026-03-05 01:01:21.663717 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph)
2026-03-05 01:01:21.663724 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume)
2026-03-05 01:01:21.663731 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-03-05 01:01:21.663738 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-03-05 01:01:21.663757 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-03-05 01:01:21.663764 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-03-05 01:01:21.663771 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance)
2026-03-05 01:01:21.663777 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi)
2026-03-05 01:01:21.663784 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila)
2026-03-05 01:01:21.663791 | orchestrator |
2026-03-05 01:01:21.663798 | orchestrator | TASK [Write ceph keys to the configuration directory] **************************
2026-03-05 01:01:21.663804 | orchestrator | Thursday 05 March 2026 01:00:57 +0000 (0:00:03.466) 0:00:31.620 ********
2026-03-05 01:01:21.663812 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring)
2026-03-05 01:01:21.663819 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-03-05 01:01:21.663826 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-03-05 01:01:21.663832 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring)
2026-03-05 01:01:21.663839 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-03-05 01:01:21.663846 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring)
2026-03-05 01:01:21.663853 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring)
2026-03-05 01:01:21.663859 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring)
2026-03-05 01:01:21.663866 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring)
2026-03-05 01:01:21.663873 | orchestrator |
2026-03-05 01:01:21.663880 | orchestrator | PLAY RECAP *********************************************************************
2026-03-05 01:01:21.663896 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-05 01:01:21.663904 | orchestrator |
2026-03-05 01:01:21.663912 | orchestrator |
2026-03-05 01:01:21.663919 | orchestrator | TASKS RECAP ********************************************************************
2026-03-05 01:01:21.663925 | orchestrator | Thursday 05 March 2026 01:01:05 +0000 (0:00:07.730) 0:00:39.351 ********
2026-03-05 01:01:21.663933 | orchestrator | ===============================================================================
2026-03-05 01:01:21.663938 | orchestrator | Write ceph keys to the share directory --------------------------------- 17.36s
2026-03-05 01:01:21.663942 | orchestrator | Write ceph keys to the configuration directory -------------------------- 7.73s
2026-03-05 01:01:21.663947 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.85s
2026-03-05 01:01:21.663951 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.42s
2026-03-05 01:01:21.663955 | orchestrator | Check if target directories exist --------------------------------------- 3.47s
2026-03-05 01:01:21.663960 | orchestrator | Create share directory -------------------------------------------------- 1.34s
2026-03-05 01:01:21.663964 | orchestrator |
2026-03-05 01:01:21.663968 | orchestrator |
2026-03-05 01:01:21.663972 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-05 01:01:21.663977 | orchestrator |
2026-03-05 01:01:21.663981 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-05 01:01:21.663985 | orchestrator | Thursday 05 March 2026 00:59:19 +0000 (0:00:00.305) 0:00:00.305 ********
2026-03-05 01:01:21.663990 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:01:21.663994 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:01:21.663999 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:01:21.664003 | orchestrator |
2026-03-05 01:01:21.664007 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-05 01:01:21.664017 | orchestrator | Thursday 05 March 2026 00:59:19 +0000 (0:00:00.313) 0:00:00.618 ********
2026-03-05 01:01:21.664021 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True)
2026-03-05 01:01:21.664026 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True)
2026-03-05 01:01:21.664030 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True)
2026-03-05 01:01:21.664035 | orchestrator |
2026-03-05 01:01:21.664039 | orchestrator | PLAY [Apply role horizon] ******************************************************
2026-03-05 01:01:21.664044 | orchestrator |
2026-03-05 01:01:21.664048 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-03-05 01:01:21.664052 | orchestrator | Thursday 05 March 2026 00:59:19 +0000 (0:00:00.465) 0:00:01.084 ********
2026-03-05 01:01:21.664057 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 01:01:21.664061 | orchestrator |
2026-03-05 01:01:21.664065 | orchestrator | TASK [horizon : Ensuring config directories exist] *****************************
2026-03-05 01:01:21.664070 | orchestrator | Thursday 05 March 2026 00:59:20 +0000 (0:00:00.589) 0:00:01.674 ********
2026-03-05 01:01:21.664088 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-05 01:01:21.664110 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-05 01:01:21.664121 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-05 01:01:21.664130 | orchestrator |
2026-03-05 01:01:21.664135 | orchestrator | TASK [horizon : Set empty custom policy] ***************************************
2026-03-05 01:01:21.664178 | orchestrator | Thursday 05 March 2026 00:59:21 +0000 (0:00:01.220) 0:00:02.895 ********
2026-03-05 01:01:21.664184 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:01:21.664188 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:01:21.664192 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:01:21.664197 | orchestrator |
2026-03-05 01:01:21.664201 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-03-05 01:01:21.664205 | orchestrator | Thursday 05 March 2026 00:59:22 +0000 (0:00:00.526) 0:00:03.421 ********
2026-03-05 01:01:21.664210 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})
2026-03-05 01:01:21.664214 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})
2026-03-05 01:01:21.664218 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})
2026-03-05 01:01:21.664223 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})
2026-03-05 01:01:21.664227 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})
2026-03-05 01:01:21.664231 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})
2026-03-05 01:01:21.664239 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})
2026-03-05 01:01:21.664243 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})
2026-03-05 01:01:21.664248 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})
2026-03-05 01:01:21.664252 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})
2026-03-05 01:01:21.664256 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})
2026-03-05 01:01:21.664261 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})
2026-03-05 01:01:21.664267 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})
2026-03-05 01:01:21.664272 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})
2026-03-05 01:01:21.664277 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})
2026-03-05 01:01:21.664282 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})
2026-03-05 01:01:21.664292 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})
2026-03-05 01:01:21.664297 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})
2026-03-05 01:01:21.664302 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})
2026-03-05 01:01:21.664307 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})
2026-03-05 01:01:21.664312 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})
2026-03-05 01:01:21.664317 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})
2026-03-05 01:01:21.664326 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})
2026-03-05 01:01:21.664331 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})
2026-03-05 01:01:21.664338 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'})
2026-03-05 01:01:21.664345 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'})
2026-03-05 01:01:21.664350 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True})
2026-03-05 01:01:21.664356 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True})
2026-03-05 01:01:21.664361 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True})
2026-03-05 01:01:21.664366 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True})
2026-03-05 01:01:21.664371 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True})
2026-03-05 01:01:21.664377 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True})
2026-03-05 01:01:21.664383 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True})
2026-03-05 01:01:21.664388 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True})
2026-03-05 01:01:21.664393 | orchestrator |
2026-03-05 01:01:21.664398 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-05 01:01:21.664403 | orchestrator | Thursday 05 March 2026 00:59:22 +0000 (0:00:00.833) 0:00:04.255 ********
2026-03-05 01:01:21.664408 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:01:21.664412 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:01:21.664416 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:01:21.664420 | orchestrator |
2026-03-05 01:01:21.664425 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-05 01:01:21.664429 | orchestrator | Thursday 05 March 2026 00:59:23 +0000 (0:00:00.349) 0:00:04.604 ********
2026-03-05 01:01:21.664433 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:01:21.664438 | orchestrator |
2026-03-05 01:01:21.664442 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-05 01:01:21.664446 | orchestrator | Thursday 05 March 2026 00:59:23 +0000 (0:00:00.131) 0:00:04.736 ********
2026-03-05 01:01:21.664451 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:01:21.664455 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:01:21.664459 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:01:21.664463 | orchestrator |
2026-03-05 01:01:21.664468 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-05 01:01:21.664476 | orchestrator | Thursday 05 March 2026 00:59:23 +0000 (0:00:00.539) 0:00:05.275 ********
2026-03-05 01:01:21.664480 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:01:21.664484 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:01:21.664492 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:01:21.664496 | orchestrator |
2026-03-05 01:01:21.664500 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-05 01:01:21.664505 | orchestrator | Thursday 05 March 2026 00:59:24 +0000 (0:00:00.344) 0:00:05.620 ********
2026-03-05 01:01:21.664509 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:01:21.664513 | orchestrator |
2026-03-05 01:01:21.664517 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-05 01:01:21.664522 | orchestrator | Thursday 05 March 2026 00:59:24 +0000 (0:00:00.125) 0:00:05.745 ********
2026-03-05 01:01:21.664526 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:01:21.664530 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:01:21.664535 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:01:21.664539 | orchestrator |
2026-03-05 01:01:21.664543 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-05 01:01:21.664548 | orchestrator | Thursday 05 March 2026 00:59:24 +0000 (0:00:00.301) 0:00:06.046 ********
2026-03-05 01:01:21.664552 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:01:21.664556 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:01:21.664560 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:01:21.664565 | orchestrator |
2026-03-05 01:01:21.664569 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-05 01:01:21.664573 | orchestrator | Thursday 05 March 2026 00:59:25 +0000 (0:00:00.403) 0:00:06.450 ********
2026-03-05 01:01:21.664578 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:01:21.664582 | orchestrator |
2026-03-05 01:01:21.664586 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-05 01:01:21.664590 | orchestrator | Thursday 05 March 2026 00:59:25 +0000 (0:00:00.407) 0:00:06.858 ********
2026-03-05 01:01:21.664597 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:01:21.664603 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:01:21.664610 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:01:21.664614 | orchestrator |
2026-03-05 01:01:21.664618 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-05 01:01:21.664626 | orchestrator | Thursday 05 March 2026 00:59:25 +0000 (0:00:00.328) 0:00:07.186 ********
2026-03-05 01:01:21.664630 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:01:21.664634 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:01:21.664639 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:01:21.664643 | orchestrator |
2026-03-05 01:01:21.664647 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-05 01:01:21.664652 | orchestrator | Thursday 05 March 2026 00:59:26 +0000 (0:00:00.358) 0:00:07.544 ********
2026-03-05 01:01:21.664656 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:01:21.664660 | orchestrator |
2026-03-05 01:01:21.664665 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-05 01:01:21.664669 | orchestrator | Thursday 05 March 2026 00:59:26 +0000 (0:00:00.161) 0:00:07.706 ********
2026-03-05 01:01:21.664673 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:01:21.664677 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:01:21.664681 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:01:21.664686 | orchestrator |
2026-03-05 01:01:21.664690 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-05 01:01:21.664694 | orchestrator | Thursday 05 March 2026 00:59:26 +0000 (0:00:00.347) 0:00:08.053 ********
2026-03-05 01:01:21.664699 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:01:21.664703 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:01:21.664707 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:01:21.664711 | orchestrator |
2026-03-05 01:01:21.664716 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-05 01:01:21.664724 | orchestrator | Thursday 05 March 2026 00:59:27 +0000 (0:00:00.561) 0:00:08.615 ********
2026-03-05 01:01:21.664728 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:01:21.664733 | orchestrator |
2026-03-05 01:01:21.664737 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-05 01:01:21.664741 | orchestrator | Thursday 05 March 2026 00:59:27 +0000 (0:00:00.134) 0:00:08.750 ********
2026-03-05 01:01:21.664745 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:01:21.664750 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:01:21.664754 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:01:21.664758 | orchestrator |
2026-03-05 01:01:21.664762 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-05 01:01:21.664767 | orchestrator | Thursday 05 March 2026 00:59:27 +0000 (0:00:00.315) 0:00:09.066 ********
2026-03-05 01:01:21.664772 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:01:21.664779 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:01:21.664786 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:01:21.664793 | orchestrator |
2026-03-05 01:01:21.664799 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-05 01:01:21.664806 | orchestrator | Thursday 05 March 2026 00:59:28 +0000 (0:00:00.380) 0:00:09.446 ********
2026-03-05 01:01:21.664813 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:01:21.664820 | orchestrator |
2026-03-05 01:01:21.664827 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-05 01:01:21.664834 | orchestrator | Thursday 05 March 2026 00:59:28 +0000 (0:00:00.140) 0:00:09.587 ********
2026-03-05 01:01:21.664841 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:01:21.664848 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:01:21.664857 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:01:21.664861 | orchestrator |
2026-03-05 01:01:21.664865 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-05 01:01:21.664870 | orchestrator | Thursday 05 March 2026 00:59:28 +0000 (0:00:00.349) 0:00:09.936 ********
2026-03-05 01:01:21.664874 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:01:21.664878 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:01:21.664883 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:01:21.664887 | orchestrator |
2026-03-05 01:01:21.664891 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-05 01:01:21.664896 | orchestrator | Thursday 05 March 2026 00:59:29 +0000 (0:00:00.584) 0:00:10.521 ********
2026-03-05 01:01:21.664900 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:01:21.664904 | orchestrator |
2026-03-05 01:01:21.664908 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-05 01:01:21.664913 | orchestrator | Thursday 05 March 2026 00:59:29 +0000 (0:00:00.158) 0:00:10.680 ********
2026-03-05 01:01:21.664920 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:01:21.664925 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:01:21.664929 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:01:21.664933 | orchestrator |
2026-03-05 01:01:21.664937 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-05 01:01:21.664942 | orchestrator | Thursday 05 March 2026 00:59:29 +0000 (0:00:00.322) 0:00:11.002 ********
2026-03-05 01:01:21.664946 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:01:21.664950 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:01:21.664954 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:01:21.664959 | orchestrator |
2026-03-05 01:01:21.664963 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-05 01:01:21.664967 | orchestrator | Thursday 05 March 2026 00:59:30 +0000 (0:00:00.380) 0:00:11.383 ********
2026-03-05 01:01:21.664972 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:01:21.664976 | orchestrator |
2026-03-05 01:01:21.664980 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-05 01:01:21.664985 | orchestrator | Thursday 05 March 2026 00:59:30 +0000 (0:00:00.160) 0:00:11.544 ********
2026-03-05 01:01:21.664989 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:01:21.664997 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:01:21.665001 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:01:21.665005 | orchestrator |
2026-03-05 01:01:21.665010 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-05 01:01:21.665014 | orchestrator | Thursday 05 March 2026 00:59:30 +0000 (0:00:00.552) 0:00:12.096 ********
2026-03-05 01:01:21.665018 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:01:21.665023 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:01:21.665027 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:01:21.665031 | orchestrator |
2026-03-05 01:01:21.665036 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-05 01:01:21.665040 | orchestrator | Thursday 05 March 2026 00:59:31 +0000 (0:00:00.374) 0:00:12.471 ********
2026-03-05 01:01:21.665044 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:01:21.665048 | orchestrator |
2026-03-05 01:01:21.665056 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-05 01:01:21.665061 | orchestrator | Thursday 05 March 2026 00:59:31 +0000 (0:00:00.139) 0:00:12.610 ********
2026-03-05 01:01:21.665065 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:01:21.665069 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:01:21.665074 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:01:21.665078 | orchestrator |
2026-03-05 01:01:21.665082 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-05 01:01:21.665086 | orchestrator | Thursday 05 March 2026 00:59:31 +0000 (0:00:00.309) 0:00:12.920 ********
2026-03-05 01:01:21.665091 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:01:21.665095 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:01:21.665099 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:01:21.665103 | orchestrator |
2026-03-05 01:01:21.665108 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-05 01:01:21.665112 | orchestrator | Thursday 05 March 2026 00:59:31 +0000 (0:00:00.324) 0:00:13.244 ********
2026-03-05 01:01:21.665116 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:01:21.665120 | orchestrator |
2026-03-05 01:01:21.665125 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-05 01:01:21.665129 | orchestrator | Thursday 05 March 2026 00:59:32 +0000 (0:00:00.162) 0:00:13.406 ********
2026-03-05 01:01:21.665133 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:01:21.665137 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:01:21.665162 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:01:21.665169 | orchestrator |
2026-03-05 01:01:21.665176 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2026-03-05 01:01:21.665182 | orchestrator | Thursday 05 March 2026 00:59:32 +0000 (0:00:00.587) 0:00:13.994 ********
2026-03-05 01:01:21.665199 | orchestrator | changed: [testbed-node-2]
2026-03-05 01:01:21.665212 | orchestrator | changed: [testbed-node-1]
2026-03-05 01:01:21.665219 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:01:21.665226 | orchestrator |
2026-03-05 01:01:21.665233 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2026-03-05 01:01:21.665240 | orchestrator | Thursday 05 March 2026 00:59:34 +0000 (0:00:02.002) 0:00:15.997 ********
2026-03-05 01:01:21.665247 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-03-05 01:01:21.665254 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-03-05 01:01:21.665260 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-03-05 01:01:21.665267 | orchestrator |
2026-03-05 01:01:21.665274 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2026-03-05 01:01:21.665281 | orchestrator | Thursday 05 March 2026 00:59:37 +0000 (0:00:02.591) 0:00:18.588 ********
2026-03-05 01:01:21.665288 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-03-05 01:01:21.665295 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-03-05 01:01:21.665307 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-03-05 01:01:21.665314 | orchestrator |
2026-03-05 01:01:21.665321 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2026-03-05 01:01:21.665328 | orchestrator | Thursday 05 March 2026 00:59:39 +0000 (0:00:02.679) 0:00:21.267 ********
2026-03-05 01:01:21.665335 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-03-05 01:01:21.665342 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-03-05 01:01:21.665348 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-03-05 01:01:21.665355 | orchestrator |
2026-03-05 01:01:21.665362 | orchestrator | TASK [horizon : Copying over existing policy file] *****************************
2026-03-05 01:01:21.665372 | orchestrator | Thursday 05 March 2026 00:59:42 +0000 (0:00:02.309) 0:00:23.577 ********
2026-03-05 01:01:21.665379 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:01:21.665386 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:01:21.665392 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:01:21.665399 | orchestrator |
2026-03-05 01:01:21.665406 | orchestrator | TASK [horizon : Copying over custom themes] ************************************
2026-03-05 01:01:21.665413 | orchestrator | Thursday 05 March 2026 00:59:42 +0000 (0:00:00.341) 0:00:23.919 ********
2026-03-05 01:01:21.665420 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:01:21.665426 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:01:21.665433 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:01:21.665440 | orchestrator |
2026-03-05 01:01:21.665446 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-03-05 01:01:21.665453 | orchestrator | Thursday 05 March 2026 00:59:42 +0000 (0:00:00.304) 0:00:24.224 ********
2026-03-05 01:01:21.665460 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 01:01:21.665467 | orchestrator |
2026-03-05 01:01:21.665473 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ********
2026-03-05 01:01:21.665480 | orchestrator | Thursday 05 March 2026 00:59:43 +0000 (0:00:00.893) 0:00:25.117 ********
2026-03-05 01:01:21.665495 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-05 01:01:21.665512 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 
'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-05 01:01:21.665524 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-05 01:01:21 | INFO  | Task b20a26f7-7de7-4cb5-ad3c-d65a6a176e91 is in state SUCCESS 2026-03-05 
01:01:21.665545 | orchestrator | 2026-03-05 01:01:21.665553 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-03-05 01:01:21.665559 | orchestrator | Thursday 05 March 2026 00:59:45 +0000 (0:00:01.578) 0:00:26.695 ******** 2026-03-05 01:01:21.665578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { 
path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-05 01:01:21.665585 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:01:21.665593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back 
if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-05 01:01:21.665609 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:01:21.665625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-05 01:01:21.665632 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:01:21.665640 | orchestrator | 2026-03-05 01:01:21.665647 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-03-05 01:01:21.665654 | orchestrator | Thursday 05 March 2026 00:59:46 +0000 (0:00:00.764) 0:00:27.460 ******** 2026-03-05 01:01:21.665673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 
'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-05 01:01:21.665679 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:01:21.665687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 
'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-05 01:01:21.665696 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:01:21.665704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 
'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ 
}']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-05 01:01:21.665709 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:01:21.665714 | orchestrator | 2026-03-05 01:01:21.665718 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-03-05 01:01:21.665723 | orchestrator | Thursday 05 March 2026 00:59:47 +0000 (0:00:00.960) 0:00:28.420 ******** 2026-03-05 01:01:21.665731 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-05 01:01:21.665743 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': 
False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-05 01:01:21.665753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back 
if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-05 01:01:21.665762 | orchestrator | 2026-03-05 01:01:21.665766 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-05 01:01:21.665771 | orchestrator | Thursday 05 March 2026 00:59:48 +0000 (0:00:01.702) 0:00:30.123 ******** 2026-03-05 01:01:21.665775 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:01:21.665779 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:01:21.665784 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:01:21.665791 | orchestrator | 2026-03-05 01:01:21.665798 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-05 01:01:21.665805 | orchestrator | Thursday 05 March 2026 00:59:49 +0000 (0:00:00.476) 0:00:30.599 ******** 2026-03-05 01:01:21.665813 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 01:01:21.665820 | orchestrator | 
2026-03-05 01:01:21.665827 | orchestrator | TASK [horizon : Creating Horizon database] *************************************
2026-03-05 01:01:21.665833 | orchestrator | Thursday 05 March 2026 00:59:49 +0000 (0:00:00.553) 0:00:31.153 ********
2026-03-05 01:01:21.665838 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:01:21.665842 | orchestrator |
2026-03-05 01:01:21.665846 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ********
2026-03-05 01:01:21.665851 | orchestrator | Thursday 05 March 2026 00:59:52 +0000 (0:00:02.868) 0:00:34.021 ********
2026-03-05 01:01:21.665855 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:01:21.665859 | orchestrator |
2026-03-05 01:01:21.665864 | orchestrator | TASK [horizon : Running Horizon bootstrap container] ***************************
2026-03-05 01:01:21.665871 | orchestrator | Thursday 05 March 2026 00:59:55 +0000 (0:00:03.159) 0:00:37.181 ********
2026-03-05 01:01:21.665875 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:01:21.665880 | orchestrator |
2026-03-05 01:01:21.665884 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-03-05 01:01:21.665888 | orchestrator | Thursday 05 March 2026 01:00:14 +0000 (0:00:18.356) 0:00:55.537 ********
2026-03-05 01:01:21.665893 | orchestrator |
2026-03-05 01:01:21.665897 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-03-05 01:01:21.665901 | orchestrator | Thursday 05 March 2026 01:00:14 +0000 (0:00:00.093) 0:00:55.631 ********
2026-03-05 01:01:21.665906 | orchestrator |
2026-03-05 01:01:21.665910 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-03-05 01:01:21.665914 | orchestrator | Thursday 05 March 2026 01:00:14 +0000 (0:00:00.087) 0:00:55.718 ********
2026-03-05 01:01:21.665919 | orchestrator |
2026-03-05 01:01:21.665923 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] **************************
2026-03-05 01:01:21.665927 | orchestrator | Thursday 05 March 2026 01:00:14 +0000 (0:00:00.084) 0:00:55.803 ********
2026-03-05 01:01:21.665932 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:01:21.665941 | orchestrator | changed: [testbed-node-2]
2026-03-05 01:01:21.665945 | orchestrator | changed: [testbed-node-1]
2026-03-05 01:01:21.665949 | orchestrator |
2026-03-05 01:01:21.665954 | orchestrator | PLAY RECAP *********************************************************************
2026-03-05 01:01:21.665959 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-03-05 01:01:21.665964 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2026-03-05 01:01:21.665972 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2026-03-05 01:01:21.665976 | orchestrator |
2026-03-05 01:01:21.665981 | orchestrator |
2026-03-05 01:01:21.665985 | orchestrator | TASKS RECAP ********************************************************************
2026-03-05 01:01:21.665989 | orchestrator | Thursday 05 March 2026 01:01:20 +0000 (0:01:05.543) 0:02:01.346 ********
2026-03-05 01:01:21.665994 | orchestrator | ===============================================================================
2026-03-05 01:01:21.665998 | orchestrator | horizon : Restart horizon container ------------------------------------ 65.54s
2026-03-05 01:01:21.666002 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 18.36s
2026-03-05 01:01:21.666007 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 3.16s
2026-03-05 01:01:21.666011 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.87s
2026-03-05 01:01:21.666056 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.68s
2026-03-05 01:01:21.666061 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.59s
2026-03-05 01:01:21.666065 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.31s
2026-03-05 01:01:21.666069 | orchestrator | horizon : Copying over config.json files for services ------------------- 2.00s
2026-03-05 01:01:21.666074 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.70s
2026-03-05 01:01:21.666078 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.58s
2026-03-05 01:01:21.666082 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.22s
2026-03-05 01:01:21.666087 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.96s
2026-03-05 01:01:21.666091 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.89s
2026-03-05 01:01:21.666096 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.83s
2026-03-05 01:01:21.666100 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.76s
2026-03-05 01:01:21.666104 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.59s
2026-03-05 01:01:21.666109 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.59s
2026-03-05 01:01:21.666113 | orchestrator | horizon : Update policy file name --------------------------------------- 0.58s
2026-03-05 01:01:21.666117 | orchestrator | horizon : Update policy file name --------------------------------------- 0.56s
2026-03-05 01:01:21.666122 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.55s
2026-03-05 01:01:21.666126 | orchestrator | 2026-03-05 01:01:21 | INFO  | Task 
5a3cefa6-daeb-41e8-abe0-8112cfa2fb0e is in state STARTED 2026-03-05 01:01:21.666131 | orchestrator | 2026-03-05 01:01:21 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:01:24.717393 | orchestrator | 2026-03-05 01:01:24 | INFO  | Task bbe8c570-327a-4bcd-96cb-ee421af3a2a7 is in state STARTED 2026-03-05 01:01:24.719566 | orchestrator | 2026-03-05 01:01:24 | INFO  | Task 5a3cefa6-daeb-41e8-abe0-8112cfa2fb0e is in state STARTED 2026-03-05 01:01:24.719652 | orchestrator | 2026-03-05 01:01:24 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:01:27.770524 | orchestrator | 2026-03-05 01:01:27 | INFO  | Task bbe8c570-327a-4bcd-96cb-ee421af3a2a7 is in state STARTED 2026-03-05 01:01:27.772238 | orchestrator | 2026-03-05 01:01:27 | INFO  | Task 5a3cefa6-daeb-41e8-abe0-8112cfa2fb0e is in state STARTED 2026-03-05 01:01:27.772314 | orchestrator | 2026-03-05 01:01:27 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:01:30.818068 | orchestrator | 2026-03-05 01:01:30 | INFO  | Task bbe8c570-327a-4bcd-96cb-ee421af3a2a7 is in state STARTED 2026-03-05 01:01:30.819651 | orchestrator | 2026-03-05 01:01:30 | INFO  | Task 5a3cefa6-daeb-41e8-abe0-8112cfa2fb0e is in state STARTED 2026-03-05 01:01:30.819718 | orchestrator | 2026-03-05 01:01:30 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:01:33.865506 | orchestrator | 2026-03-05 01:01:33 | INFO  | Task bbe8c570-327a-4bcd-96cb-ee421af3a2a7 is in state STARTED 2026-03-05 01:01:33.868716 | orchestrator | 2026-03-05 01:01:33 | INFO  | Task 5a3cefa6-daeb-41e8-abe0-8112cfa2fb0e is in state STARTED 2026-03-05 01:01:33.869573 | orchestrator | 2026-03-05 01:01:33 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:01:36.915560 | orchestrator | 2026-03-05 01:01:36 | INFO  | Task bbe8c570-327a-4bcd-96cb-ee421af3a2a7 is in state STARTED 2026-03-05 01:01:36.916843 | orchestrator | 2026-03-05 01:01:36 | INFO  | Task 5a3cefa6-daeb-41e8-abe0-8112cfa2fb0e is in state STARTED 2026-03-05 
01:01:36.916891 | orchestrator | 2026-03-05 01:01:36 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:01:39.961316 | orchestrator | 2026-03-05 01:01:39 | INFO  | Task bbe8c570-327a-4bcd-96cb-ee421af3a2a7 is in state STARTED 2026-03-05 01:01:39.963802 | orchestrator | 2026-03-05 01:01:39 | INFO  | Task 5a3cefa6-daeb-41e8-abe0-8112cfa2fb0e is in state STARTED 2026-03-05 01:01:39.963913 | orchestrator | 2026-03-05 01:01:39 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:01:42.997554 | orchestrator | 2026-03-05 01:01:42 | INFO  | Task bbe8c570-327a-4bcd-96cb-ee421af3a2a7 is in state STARTED 2026-03-05 01:01:43.000728 | orchestrator | 2026-03-05 01:01:42 | INFO  | Task 5a3cefa6-daeb-41e8-abe0-8112cfa2fb0e is in state STARTED 2026-03-05 01:01:43.000813 | orchestrator | 2026-03-05 01:01:42 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:01:46.053654 | orchestrator | 2026-03-05 01:01:46 | INFO  | Task bbe8c570-327a-4bcd-96cb-ee421af3a2a7 is in state STARTED 2026-03-05 01:01:46.053723 | orchestrator | 2026-03-05 01:01:46 | INFO  | Task 5a3cefa6-daeb-41e8-abe0-8112cfa2fb0e is in state STARTED 2026-03-05 01:01:46.053729 | orchestrator | 2026-03-05 01:01:46 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:01:49.100513 | orchestrator | 2026-03-05 01:01:49 | INFO  | Task bbe8c570-327a-4bcd-96cb-ee421af3a2a7 is in state STARTED 2026-03-05 01:01:49.102953 | orchestrator | 2026-03-05 01:01:49 | INFO  | Task 5a3cefa6-daeb-41e8-abe0-8112cfa2fb0e is in state STARTED 2026-03-05 01:01:49.103022 | orchestrator | 2026-03-05 01:01:49 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:01:52.143639 | orchestrator | 2026-03-05 01:01:52 | INFO  | Task bbe8c570-327a-4bcd-96cb-ee421af3a2a7 is in state STARTED 2026-03-05 01:01:52.145905 | orchestrator | 2026-03-05 01:01:52 | INFO  | Task 5a3cefa6-daeb-41e8-abe0-8112cfa2fb0e is in state STARTED 2026-03-05 01:01:52.145970 | orchestrator | 2026-03-05 01:01:52 | INFO  | Wait 1 second(s) 
until the next check 2026-03-05 01:01:55.185509 | orchestrator | 2026-03-05 01:01:55 | INFO  | Task bbe8c570-327a-4bcd-96cb-ee421af3a2a7 is in state STARTED 2026-03-05 01:01:55.188070 | orchestrator | 2026-03-05 01:01:55 | INFO  | Task 5a3cefa6-daeb-41e8-abe0-8112cfa2fb0e is in state STARTED 2026-03-05 01:01:55.188231 | orchestrator | 2026-03-05 01:01:55 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:01:58.234290 | orchestrator | 2026-03-05 01:01:58 | INFO  | Task bbe8c570-327a-4bcd-96cb-ee421af3a2a7 is in state STARTED 2026-03-05 01:01:58.235261 | orchestrator | 2026-03-05 01:01:58 | INFO  | Task 5a3cefa6-daeb-41e8-abe0-8112cfa2fb0e is in state STARTED 2026-03-05 01:01:58.235296 | orchestrator | 2026-03-05 01:01:58 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:02:01.285391 | orchestrator | 2026-03-05 01:02:01 | INFO  | Task bbe8c570-327a-4bcd-96cb-ee421af3a2a7 is in state STARTED 2026-03-05 01:02:01.288201 | orchestrator | 2026-03-05 01:02:01 | INFO  | Task 5a3cefa6-daeb-41e8-abe0-8112cfa2fb0e is in state STARTED 2026-03-05 01:02:01.288277 | orchestrator | 2026-03-05 01:02:01 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:02:04.333657 | orchestrator | 2026-03-05 01:02:04 | INFO  | Task bbe8c570-327a-4bcd-96cb-ee421af3a2a7 is in state STARTED 2026-03-05 01:02:04.335346 | orchestrator | 2026-03-05 01:02:04 | INFO  | Task 5a3cefa6-daeb-41e8-abe0-8112cfa2fb0e is in state STARTED 2026-03-05 01:02:04.335422 | orchestrator | 2026-03-05 01:02:04 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:02:07.380714 | orchestrator | 2026-03-05 01:02:07 | INFO  | Task bbe8c570-327a-4bcd-96cb-ee421af3a2a7 is in state SUCCESS 2026-03-05 01:02:07.382287 | orchestrator | 2026-03-05 01:02:07 | INFO  | Task 5a3cefa6-daeb-41e8-abe0-8112cfa2fb0e is in state STARTED 2026-03-05 01:02:07.382332 | orchestrator | 2026-03-05 01:02:07 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:02:10.437934 | orchestrator | 2026-03-05 
01:02:10 | INFO  | Task 67b1ea4f-ef56-47e6-94d9-1d9931fcb65d is in state STARTED 2026-03-05 01:02:10.439270 | orchestrator | 2026-03-05 01:02:10 | INFO  | Task 5a3cefa6-daeb-41e8-abe0-8112cfa2fb0e is in state STARTED 2026-03-05 01:02:10.440108 | orchestrator | 2026-03-05 01:02:10 | INFO  | Task 1a906ee2-8dc0-4933-b7ef-b6dae9a44063 is in state STARTED 2026-03-05 01:02:10.441526 | orchestrator | 2026-03-05 01:02:10 | INFO  | Task 16d4b45c-4b97-49a0-8d20-b92fdb2e517b is in state STARTED 2026-03-05 01:02:10.441569 | orchestrator | 2026-03-05 01:02:10 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:02:13.504642 | orchestrator | 2026-03-05 01:02:13 | INFO  | Task 67b1ea4f-ef56-47e6-94d9-1d9931fcb65d is in state STARTED 2026-03-05 01:02:13.505840 | orchestrator | 2026-03-05 01:02:13 | INFO  | Task 5a3cefa6-daeb-41e8-abe0-8112cfa2fb0e is in state STARTED 2026-03-05 01:02:13.508387 | orchestrator | 2026-03-05 01:02:13 | INFO  | Task 1a906ee2-8dc0-4933-b7ef-b6dae9a44063 is in state STARTED 2026-03-05 01:02:13.510134 | orchestrator | 2026-03-05 01:02:13 | INFO  | Task 16d4b45c-4b97-49a0-8d20-b92fdb2e517b is in state STARTED 2026-03-05 01:02:13.510195 | orchestrator | 2026-03-05 01:02:13 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:02:16.589029 | orchestrator | 2026-03-05 01:02:16 | INFO  | Task dfdea0cb-5efb-4dbc-965b-da02d7a2fb6f is in state STARTED 2026-03-05 01:02:16.589208 | orchestrator | 2026-03-05 01:02:16 | INFO  | Task 67b1ea4f-ef56-47e6-94d9-1d9931fcb65d is in state STARTED 2026-03-05 01:02:16.589226 | orchestrator | 2026-03-05 01:02:16 | INFO  | Task 5a3cefa6-daeb-41e8-abe0-8112cfa2fb0e is in state STARTED 2026-03-05 01:02:16.589236 | orchestrator | 2026-03-05 01:02:16 | INFO  | Task 4a46188d-de68-4ab4-9d80-df77858d649e is in state STARTED 2026-03-05 01:02:16.589245 | orchestrator | 2026-03-05 01:02:16 | INFO  | Task 1a906ee2-8dc0-4933-b7ef-b6dae9a44063 is in state SUCCESS 2026-03-05 01:02:16.589253 | orchestrator | 2026-03-05 
01:02:16 | INFO  | Task 16d4b45c-4b97-49a0-8d20-b92fdb2e517b is in state STARTED 2026-03-05 01:02:16.589291 | orchestrator | 2026-03-05 01:02:16 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:02:19.668846 | orchestrator | 2026-03-05 01:02:19 | INFO  | Task dfdea0cb-5efb-4dbc-965b-da02d7a2fb6f is in state STARTED 2026-03-05 01:02:19.668916 | orchestrator | 2026-03-05 01:02:19 | INFO  | Task 67b1ea4f-ef56-47e6-94d9-1d9931fcb65d is in state STARTED 2026-03-05 01:02:19.670528 | orchestrator | 2026-03-05 01:02:19 | INFO  | Task 5a3cefa6-daeb-41e8-abe0-8112cfa2fb0e is in state STARTED 2026-03-05 01:02:19.672412 | orchestrator | 2026-03-05 01:02:19 | INFO  | Task 4a46188d-de68-4ab4-9d80-df77858d649e is in state STARTED 2026-03-05 01:02:19.674069 | orchestrator | 2026-03-05 01:02:19 | INFO  | Task 16d4b45c-4b97-49a0-8d20-b92fdb2e517b is in state STARTED 2026-03-05 01:02:19.674115 | orchestrator | 2026-03-05 01:02:19 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:02:22.710991 | orchestrator | 2026-03-05 01:02:22 | INFO  | Task dfdea0cb-5efb-4dbc-965b-da02d7a2fb6f is in state STARTED 2026-03-05 01:02:22.713981 | orchestrator | 2026-03-05 01:02:22 | INFO  | Task 67b1ea4f-ef56-47e6-94d9-1d9931fcb65d is in state STARTED 2026-03-05 01:02:22.716219 | orchestrator | 2026-03-05 01:02:22 | INFO  | Task 5a3cefa6-daeb-41e8-abe0-8112cfa2fb0e is in state SUCCESS 2026-03-05 01:02:22.718373 | orchestrator | 2026-03-05 01:02:22.718415 | orchestrator | 2026-03-05 01:02:22.718422 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2026-03-05 01:02:22.718430 | orchestrator | 2026-03-05 01:02:22.718436 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2026-03-05 01:02:22.718443 | orchestrator | Thursday 05 March 2026 01:01:10 +0000 (0:00:00.320) 0:00:00.320 ******** 2026-03-05 01:02:22.718449 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2026-03-05 01:02:22.718457 | orchestrator |
2026-03-05 01:02:22.718463 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2026-03-05 01:02:22.718469 | orchestrator | Thursday 05 March 2026 01:01:11 +0000 (0:00:00.253) 0:00:00.573 ********
2026-03-05 01:02:22.718475 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2026-03-05 01:02:22.718482 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2026-03-05 01:02:22.718501 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2026-03-05 01:02:22.718508 | orchestrator |
2026-03-05 01:02:22.718517 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2026-03-05 01:02:22.718530 | orchestrator | Thursday 05 March 2026 01:01:12 +0000 (0:00:01.581) 0:00:02.155 ********
2026-03-05 01:02:22.718546 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2026-03-05 01:02:22.718557 | orchestrator |
2026-03-05 01:02:22.718567 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2026-03-05 01:02:22.718577 | orchestrator | Thursday 05 March 2026 01:01:14 +0000 (0:00:01.666) 0:00:03.821 ********
2026-03-05 01:02:22.718587 | orchestrator | changed: [testbed-manager]
2026-03-05 01:02:22.718596 | orchestrator |
2026-03-05 01:02:22.718606 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2026-03-05 01:02:22.718615 | orchestrator | Thursday 05 March 2026 01:01:15 +0000 (0:00:01.043) 0:00:04.865 ********
2026-03-05 01:02:22.718624 | orchestrator | changed: [testbed-manager]
2026-03-05 01:02:22.718633 | orchestrator |
2026-03-05 01:02:22.718642 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2026-03-05 01:02:22.718653 | orchestrator | Thursday 05 March 2026 01:01:16 +0000 (0:00:00.957) 0:00:05.823 ********
2026-03-05 01:02:22.718662 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2026-03-05 01:02:22.718695 | orchestrator | ok: [testbed-manager]
2026-03-05 01:02:22.718707 | orchestrator |
2026-03-05 01:02:22.718716 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2026-03-05 01:02:22.718726 | orchestrator | Thursday 05 March 2026 01:01:56 +0000 (0:00:39.620) 0:00:45.443 ********
2026-03-05 01:02:22.718735 | orchestrator | changed: [testbed-manager] => (item=ceph)
2026-03-05 01:02:22.718745 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2026-03-05 01:02:22.718754 | orchestrator | changed: [testbed-manager] => (item=rados)
2026-03-05 01:02:22.718764 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2026-03-05 01:02:22.718773 | orchestrator | changed: [testbed-manager] => (item=rbd)
2026-03-05 01:02:22.718783 | orchestrator |
2026-03-05 01:02:22.718793 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2026-03-05 01:02:22.718803 | orchestrator | Thursday 05 March 2026 01:02:00 +0000 (0:00:04.436) 0:00:49.880 ********
2026-03-05 01:02:22.718813 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2026-03-05 01:02:22.718823 | orchestrator |
2026-03-05 01:02:22.718833 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2026-03-05 01:02:22.718843 | orchestrator | Thursday 05 March 2026 01:02:00 +0000 (0:00:00.511) 0:00:50.392 ********
2026-03-05 01:02:22.718853 | orchestrator | skipping: [testbed-manager]
2026-03-05 01:02:22.718863 | orchestrator |
2026-03-05 01:02:22.718873 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2026-03-05 
01:02:22.718883 | orchestrator | Thursday 05 March 2026 01:02:01 +0000 (0:00:00.206) 0:00:50.598 ********
2026-03-05 01:02:22.718893 | orchestrator | skipping: [testbed-manager]
2026-03-05 01:02:22.719495 | orchestrator |
2026-03-05 01:02:22.719506 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2026-03-05 01:02:22.719513 | orchestrator | Thursday 05 March 2026 01:02:01 +0000 (0:00:00.592) 0:00:51.190 ********
2026-03-05 01:02:22.719519 | orchestrator | changed: [testbed-manager]
2026-03-05 01:02:22.719524 | orchestrator |
2026-03-05 01:02:22.719531 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2026-03-05 01:02:22.719537 | orchestrator | Thursday 05 March 2026 01:02:03 +0000 (0:00:01.514) 0:00:52.705 ********
2026-03-05 01:02:22.719543 | orchestrator | changed: [testbed-manager]
2026-03-05 01:02:22.719549 | orchestrator |
2026-03-05 01:02:22.719555 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2026-03-05 01:02:22.719561 | orchestrator | Thursday 05 March 2026 01:02:04 +0000 (0:00:00.781) 0:00:53.486 ********
2026-03-05 01:02:22.719566 | orchestrator | changed: [testbed-manager]
2026-03-05 01:02:22.719572 | orchestrator |
2026-03-05 01:02:22.719578 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2026-03-05 01:02:22.719584 | orchestrator | Thursday 05 March 2026 01:02:04 +0000 (0:00:00.637) 0:00:54.124 ********
2026-03-05 01:02:22.719590 | orchestrator | ok: [testbed-manager] => (item=ceph)
2026-03-05 01:02:22.719597 | orchestrator | ok: [testbed-manager] => (item=rados)
2026-03-05 01:02:22.719603 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2026-03-05 01:02:22.719609 | orchestrator | ok: [testbed-manager] => (item=rbd)
2026-03-05 01:02:22.719614 | orchestrator |
2026-03-05 01:02:22.719620 | orchestrator | PLAY RECAP *********************************************************************
2026-03-05 01:02:22.719627 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-05 01:02:22.719635 | orchestrator |
2026-03-05 01:02:22.719640 | orchestrator |
2026-03-05 01:02:22.719701 | orchestrator | TASKS RECAP ********************************************************************
2026-03-05 01:02:22.719710 | orchestrator | Thursday 05 March 2026 01:02:06 +0000 (0:00:01.760) 0:00:55.884 ********
2026-03-05 01:02:22.719716 | orchestrator | ===============================================================================
2026-03-05 01:02:22.719721 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 39.62s
2026-03-05 01:02:22.719795 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.44s
2026-03-05 01:02:22.719817 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.76s
2026-03-05 01:02:22.719823 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.67s
2026-03-05 01:02:22.719828 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.58s
2026-03-05 01:02:22.719835 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.51s
2026-03-05 01:02:22.719841 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 1.04s
2026-03-05 01:02:22.719855 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.96s
2026-03-05 01:02:22.719860 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.78s
2026-03-05 01:02:22.719866 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.64s
2026-03-05 01:02:22.719872 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.59s
2026-03-05 01:02:22.719878 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.51s
2026-03-05 01:02:22.719883 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.25s
2026-03-05 01:02:22.719889 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.21s
2026-03-05 01:02:22.719895 | orchestrator |
2026-03-05 01:02:22.719901 | orchestrator |
2026-03-05 01:02:22.719906 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-05 01:02:22.719912 | orchestrator |
2026-03-05 01:02:22.719918 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-05 01:02:22.719924 | orchestrator | Thursday 05 March 2026 01:02:12 +0000 (0:00:00.223) 0:00:00.223 ********
2026-03-05 01:02:22.719929 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:02:22.719935 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:02:22.719941 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:02:22.719947 | orchestrator |
2026-03-05 01:02:22.719953 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-05 01:02:22.719958 | orchestrator | Thursday 05 March 2026 01:02:12 +0000 (0:00:00.339) 0:00:00.562 ********
2026-03-05 01:02:22.719964 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2026-03-05 01:02:22.719970 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2026-03-05 01:02:22.719976 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2026-03-05 01:02:22.719981 | orchestrator |
2026-03-05 01:02:22.719987 | orchestrator | PLAY [Wait for the Keystone service] *******************************************
2026-03-05 01:02:22.719993 | orchestrator |
2026-03-05 01:02:22.719999 | orchestrator | TASK [Waiting for Keystone public port to be UP] 
*******************************
2026-03-05 01:02:22.720004 | orchestrator | Thursday 05 March 2026 01:02:13 +0000 (0:00:00.817) 0:00:01.379 ********
2026-03-05 01:02:22.720010 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:02:22.720016 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:02:22.720021 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:02:22.720027 | orchestrator |
2026-03-05 01:02:22.720033 | orchestrator | PLAY RECAP *********************************************************************
2026-03-05 01:02:22.720039 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-05 01:02:22.720046 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-05 01:02:22.720052 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-05 01:02:22.720058 | orchestrator |
2026-03-05 01:02:22.720064 | orchestrator |
2026-03-05 01:02:22.720069 | orchestrator | TASKS RECAP ********************************************************************
2026-03-05 01:02:22.720075 | orchestrator | Thursday 05 March 2026 01:02:14 +0000 (0:00:00.863) 0:00:02.243 ********
2026-03-05 01:02:22.720081 | orchestrator | ===============================================================================
2026-03-05 01:02:22.720091 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.86s
2026-03-05 01:02:22.720097 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.82s
2026-03-05 01:02:22.720103 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.34s
2026-03-05 01:02:22.720109 | orchestrator |
2026-03-05 01:02:22.720114 | orchestrator |
2026-03-05 01:02:22.720120 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-05 01:02:22.720126 | orchestrator |
2026-03-05 
01:02:22.720132 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-05 01:02:22.720137 | orchestrator | Thursday 05 March 2026 00:59:18 +0000 (0:00:00.317) 0:00:00.317 ******** 2026-03-05 01:02:22.720163 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:02:22.720170 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:02:22.720176 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:02:22.720181 | orchestrator | 2026-03-05 01:02:22.720187 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-05 01:02:22.720193 | orchestrator | Thursday 05 March 2026 00:59:19 +0000 (0:00:00.354) 0:00:00.672 ******** 2026-03-05 01:02:22.720199 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-03-05 01:02:22.720205 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-03-05 01:02:22.720211 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-03-05 01:02:22.720216 | orchestrator | 2026-03-05 01:02:22.720222 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2026-03-05 01:02:22.720228 | orchestrator | 2026-03-05 01:02:22.720256 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-05 01:02:22.720263 | orchestrator | Thursday 05 March 2026 00:59:19 +0000 (0:00:00.473) 0:00:01.146 ******** 2026-03-05 01:02:22.720269 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 01:02:22.720275 | orchestrator | 2026-03-05 01:02:22.720281 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-03-05 01:02:22.720287 | orchestrator | Thursday 05 March 2026 00:59:20 +0000 (0:00:00.706) 0:00:01.852 ******** 2026-03-05 01:02:22.720302 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': 
{'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-05 01:02:22.720313 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 
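[editor's note] The keystone service definitions above contain empty-string entries in their `volumes` lists, left behind by optional volumes that rendered to nothing. A minimal sketch, assuming such entries are simply dropped before the list is handed to Docker (the function name and filtering logic are illustrative, not the actual kolla-ansible implementation):

```python
# Illustrative only: drop empty volume entries from a kolla-style
# service definition before passing them to the container runtime.
service = {
    "container_name": "keystone",
    "volumes": [
        "/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro",
        "/etc/localtime:/etc/localtime:ro",
        "",  # an optional volume that rendered empty
        "kolla_logs:/var/log/kolla/",
        "",
        "keystone_fernet_tokens:/etc/keystone/fernet-keys",
    ],
}

def effective_volumes(volumes):
    """Return the volume list with empty entries removed."""
    return [v for v in volumes if v.strip()]

print(effective_volumes(service["volumes"]))
```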
2026-03-05 01:02:22.720324 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-05 01:02:22.720332 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-05 01:02:22.720359 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-05 01:02:22.720371 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-05 01:02:22.720378 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-05 01:02:22.720384 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-05 01:02:22.720395 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-05 01:02:22.720401 | orchestrator | 2026-03-05 01:02:22.720407 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2026-03-05 01:02:22.720413 | orchestrator | Thursday 05 March 2026 00:59:22 +0000 (0:00:01.965) 0:00:03.818 ******** 2026-03-05 01:02:22.720418 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:02:22.720424 | orchestrator | 2026-03-05 01:02:22.720430 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-03-05 01:02:22.720436 | orchestrator | Thursday 05 March 2026 00:59:22 +0000 (0:00:00.143) 0:00:03.961 ******** 2026-03-05 01:02:22.720444 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:02:22.720451 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:02:22.720458 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:02:22.720465 | orchestrator | 2026-03-05 01:02:22.720471 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 
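[editor's note] Every keystone container above is deployed with the same healthcheck shape (`interval: 30`, `retries: 3`, `start_period: 5`, `timeout: 30`). A small sketch of Docker-style retry semantics under those settings: a container is only marked unhealthy after `retries` consecutive failed probes, and a single success resets the streak (the function and its state names are ours, purely illustrative):

```python
def classify(probe_results, retries=3):
    """Classify a container from a sequence of healthcheck probe results.

    Docker marks a container unhealthy only after `retries` consecutive
    failed probes; any successful probe resets the failure streak.
    """
    streak = 0
    for ok in probe_results:
        streak = 0 if ok else streak + 1
        if streak >= retries:
            return "unhealthy"
    if not probe_results:
        return "starting"  # no probe has completed yet
    return "healthy" if probe_results[-1] else "failing"

# Two isolated failures never reach the threshold of 3:
print(classify([True, False, False, True]))   # healthy
# Three consecutive failures cross it:
print(classify([False, False, False]))        # unhealthy
```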
2026-03-05 01:02:22.720478 | orchestrator | Thursday 05 March 2026 00:59:22 +0000 (0:00:00.504) 0:00:04.466 ******** 2026-03-05 01:02:22.720485 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-05 01:02:22.720494 | orchestrator | 2026-03-05 01:02:22.720504 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-05 01:02:22.720515 | orchestrator | Thursday 05 March 2026 00:59:23 +0000 (0:00:00.988) 0:00:05.454 ******** 2026-03-05 01:02:22.720525 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 01:02:22.720535 | orchestrator | 2026-03-05 01:02:22.720546 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-03-05 01:02:22.720560 | orchestrator | Thursday 05 March 2026 00:59:24 +0000 (0:00:00.618) 0:00:06.073 ******** 2026-03-05 01:02:22.720575 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-05 01:02:22.720587 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-05 01:02:22.720604 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': 
'5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-05 01:02:22.720616 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-05 01:02:22.720636 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-05 01:02:22.720652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-05 
01:02:22.720662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-05 01:02:22.720687 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-05 01:02:22.720699 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-05 01:02:22.720710 | orchestrator | 2026-03-05 01:02:22.720722 | 
orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-03-05 01:02:22.720731 | orchestrator | Thursday 05 March 2026 00:59:28 +0000 (0:00:03.640) 0:00:09.713 ******** 2026-03-05 01:02:22.720741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-05 01:02:22.720760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-05 01:02:22.720776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-05 01:02:22.720795 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:02:22.720805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-05 01:02:22.720816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-05 01:02:22.720826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-05 01:02:22.720837 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:02:22.720853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-05 01:02:22.720868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-05 01:02:22.720886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-05 01:02:22.720895 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:02:22.720904 | orchestrator | 2026-03-05 01:02:22.720910 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-03-05 01:02:22.720916 | orchestrator | Thursday 05 March 2026 00:59:28 +0000 (0:00:00.635) 0:00:10.348 ******** 2026-03-05 01:02:22.720922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-05 01:02:22.720929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-05 01:02:22.720935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-05 01:02:22.720941 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:02:22.720956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-05 01:02:22.720967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-05 01:02:22.720973 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-05 01:02:22.720979 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:02:22.720986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-05 01:02:22.720992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-05 01:02:22.721001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-05 01:02:22.721008 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:02:22.721018 | orchestrator | 2026-03-05 01:02:22.721024 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-03-05 01:02:22.721029 | orchestrator | Thursday 05 March 2026 00:59:29 +0000 (0:00:00.873) 0:00:11.222 ******** 2026-03-05 01:02:22.721039 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-05 01:02:22.721046 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-05 01:02:22.721052 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-05 01:02:22.721062 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-05 01:02:22.721068 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-05 01:02:22.721082 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-05 01:02:22.721088 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-05 01:02:22.721094 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-05 01:02:22.721100 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-05 01:02:22.721106 | orchestrator |
2026-03-05 01:02:22.721112 | orchestrator | TASK [keystone : Copying over keystone.conf] ***********************************
2026-03-05 01:02:22.721118 | orchestrator | Thursday 05 March 2026 00:59:33 +0000 (0:00:03.507) 0:00:14.729 ********
2026-03-05 01:02:22.721128 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-05 01:02:22.721166 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-05 01:02:22.721176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-05 01:02:22.721182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-05 01:02:22.721188 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-05 01:02:22.721195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-05 01:02:22.721210 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-05 01:02:22.721240 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-05 01:02:22.721247 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-05 01:02:22.721253 | orchestrator |
2026-03-05 01:02:22.721259 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] *****************
2026-03-05 01:02:22.721265 | orchestrator | Thursday 05 March 2026 00:59:39 +0000 (0:00:06.631) 0:00:21.361 ********
2026-03-05 01:02:22.721271 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:02:22.721277 | orchestrator | changed: [testbed-node-1]
2026-03-05 01:02:22.721283 | orchestrator | changed: [testbed-node-2]
2026-03-05 01:02:22.721289 | orchestrator |
2026-03-05 01:02:22.721294 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] *************
2026-03-05 01:02:22.721301 | orchestrator | Thursday 05 March 2026 00:59:41 +0000 (0:00:01.716) 0:00:23.077 ********
2026-03-05 01:02:22.721306 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:02:22.721312 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:02:22.721318 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:02:22.721324 | orchestrator |
2026-03-05 01:02:22.721329 | orchestrator | TASK [keystone : Get file list in custom domains folder] ***********************
2026-03-05 01:02:22.721335 | orchestrator | Thursday 05 March 2026 00:59:42 +0000 (0:00:00.351) 0:00:23.711 ********
2026-03-05 01:02:22.721341 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:02:22.721346 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:02:22.721352 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:02:22.721358 | orchestrator |
2026-03-05 01:02:22.721364 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ********************
2026-03-05 01:02:22.721369 | orchestrator | Thursday 05 March 2026 00:59:42 +0000 (0:00:00.564) 0:00:24.063 ********
2026-03-05 01:02:22.721375 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:02:22.721381 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:02:22.721386 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:02:22.721392 | orchestrator |
2026-03-05 01:02:22.721398 | orchestrator | TASK [keystone : Copying over existing policy file] ****************************
2026-03-05 01:02:22.721410 | orchestrator | Thursday 05 March 2026 00:59:43 +0000 (0:00:00.564) 0:00:24.627 ********
2026-03-05 01:02:22.721417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-05 01:02:22.721427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-05 01:02:22.721438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-05 01:02:22.721444 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:02:22.721450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-05 01:02:22.721456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-05 01:02:22.721466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-05 01:02:22.721472 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:02:22.721482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-05 01:02:22.721492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-05 01:02:22.721498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-05 01:02:22.721504 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:02:22.721510 | orchestrator |
2026-03-05 01:02:22.721516 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-03-05 01:02:22.721521 | orchestrator | Thursday 05 March 2026 00:59:43 +0000 (0:00:00.322) 0:00:25.276 ********
2026-03-05 01:02:22.721527 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:02:22.721533 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:02:22.721539 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:02:22.721545 | orchestrator |
2026-03-05 01:02:22.721551 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ******************************
2026-03-05 01:02:22.721556 | orchestrator | Thursday 05 March 2026 00:59:43 +0000 (0:00:00.322) 0:00:25.598 ********
2026-03-05 01:02:22.721562 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-03-05 01:02:22.721569 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-03-05 01:02:22.721579 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-03-05 01:02:22.721585 | orchestrator |
2026-03-05 01:02:22.721590 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] **************
2026-03-05 01:02:22.721596 | orchestrator | Thursday 05 March 2026 00:59:45 +0000 (0:00:01.830) 0:00:27.429 ********
2026-03-05 01:02:22.721602 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-05 01:02:22.721608 | orchestrator |
2026-03-05 01:02:22.721614 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ******************************
2026-03-05 01:02:22.721623 | orchestrator | Thursday 05 March 2026 00:59:46 +0000 (0:00:01.111) 0:00:28.540 ********
2026-03-05 01:02:22.721633 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:02:22.721642 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:02:22.721651 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:02:22.721660 | orchestrator |
2026-03-05 01:02:22.721675 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] *****************
2026-03-05 01:02:22.721686 | orchestrator | Thursday 05 March 2026 00:59:47 +0000 (0:00:00.988) 0:00:29.529 ********
2026-03-05 01:02:22.721695 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-05 01:02:22.721704 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-03-05 01:02:22.721713 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-03-05 01:02:22.721722 | orchestrator |
2026-03-05 01:02:22.721731 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] ***
2026-03-05 01:02:22.721741 | orchestrator | Thursday 05 March 2026 00:59:49 +0000 (0:00:01.328) 0:00:30.858 ********
2026-03-05 01:02:22.721750 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:02:22.721758 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:02:22.721768 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:02:22.721777 | orchestrator |
2026-03-05 01:02:22.721786 | orchestrator | TASK [keystone : Copying files for keystone-fernet] ****************************
2026-03-05 01:02:22.721795 | orchestrator | Thursday 05 March 2026 00:59:49 +0000 (0:00:00.395) 0:00:31.253 ********
2026-03-05 01:02:22.721805 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-03-05 01:02:22.721815 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-03-05 01:02:22.721824 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-03-05 01:02:22.721834 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-03-05 01:02:22.721844 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-03-05 01:02:22.721859 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-03-05 01:02:22.721866 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-03-05 01:02:22.721872 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-03-05 01:02:22.721878 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-03-05 01:02:22.721884 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-03-05 01:02:22.721889 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-03-05 01:02:22.721895 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-03-05 01:02:22.721905 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-03-05 01:02:22.721911 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-03-05 01:02:22.721917 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-03-05 01:02:22.721928 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-05 01:02:22.721934 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-05 01:02:22.721940 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-05 01:02:22.721946 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-05 01:02:22.721952 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-05 01:02:22.721958 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-05 01:02:22.721963 | orchestrator |
2026-03-05 01:02:22.721969 | orchestrator | TASK [keystone : Copying files for keystone-ssh] *******************************
2026-03-05 01:02:22.721975 | orchestrator | Thursday 05 March 2026 00:59:59 +0000 (0:00:09.833) 0:00:41.087 ********
2026-03-05 01:02:22.721981 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-05 01:02:22.721987 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-05 01:02:22.721992 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-05 01:02:22.721998 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-05 01:02:22.722004 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-05 01:02:22.722010 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-05 01:02:22.722178 | orchestrator |
2026-03-05 01:02:22.722190 | orchestrator | TASK [keystone : Check keystone containers] ************************************
2026-03-05 01:02:22.722200 | orchestrator | Thursday 05 March 2026 01:00:02 +0000 (0:00:03.099) 0:00:44.186 ********
2026-03-05 01:02:22.722212 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-05 01:02:22.722232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-05 01:02:22.722259 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-05 01:02:22.722270 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-05 01:02:22.722282 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-05 01:02:22.722289 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-05 01:02:22.722295 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-05 01:02:22.722307 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-05 01:02:22.722322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-05 01:02:22.722328 | orchestrator |
2026-03-05 01:02:22.722334 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-03-05 01:02:22.722340 | orchestrator | Thursday 05 March 2026 01:00:05 +0000 (0:00:02.612) 0:00:46.798 ********
2026-03-05 01:02:22.722346 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:02:22.722351 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:02:22.722357 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:02:22.722363 | orchestrator |
2026-03-05 01:02:22.722369 | orchestrator | TASK [keystone : Creating keystone database] ***********************************
2026-03-05 01:02:22.722375 | orchestrator | Thursday 05 March 2026 01:00:05 +0000 (0:00:00.455) 0:00:47.254 ********
2026-03-05 01:02:22.722429 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:02:22.722441 | orchestrator |
2026-03-05 01:02:22.722449 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ******
2026-03-05 01:02:22.722456 | orchestrator | Thursday 05 March 2026 01:00:08 +0000 (0:00:02.505) 0:00:49.760 ********
2026-03-05 01:02:22.722462 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:02:22.722467 | orchestrator |
2026-03-05 01:02:22.722473 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] **********
2026-03-05 01:02:22.722479 | orchestrator | Thursday 05 March 2026 01:00:10 +0000 (0:00:02.450) 0:00:52.210 ********
2026-03-05 01:02:22.722485 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:02:22.722490 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:02:22.722496 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:02:22.722502 | orchestrator |
2026-03-05 01:02:22.722508 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] *****************
2026-03-05 01:02:22.722514 | orchestrator | Thursday 05 March 2026 01:00:11 +0000 (0:00:01.101) 0:00:53.311 ********
2026-03-05 01:02:22.722519 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:02:22.722525 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:02:22.722531 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:02:22.722536 | orchestrator |
2026-03-05 01:02:22.722543 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] ***
2026-03-05 01:02:22.722549 | orchestrator | Thursday 05 March 2026 01:00:12 +0000 (0:00:00.408) 0:00:53.720 ********
2026-03-05 01:02:22.722554 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:02:22.722560 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:02:22.722566 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:02:22.722572 | orchestrator |
2026-03-05 01:02:22.722578 | orchestrator | TASK [keystone : Running Keystone bootstrap container] *************************
2026-03-05 01:02:22.722583 | orchestrator | Thursday 05 March 2026 01:00:12 +0000 (0:00:00.462) 0:00:54.183 ********
2026-03-05 01:02:22.722589 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:02:22.722595 | orchestrator |
2026-03-05 01:02:22.722601 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ******************
2026-03-05 01:02:22.722607 | orchestrator | Thursday 05 March 2026 01:00:29 +0000 (0:00:16.848) 0:01:11.031 ********
2026-03-05 01:02:22.722613 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:02:22.722618 | orchestrator |
2026-03-05 01:02:22.722646 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-03-05 01:02:22.722658 | orchestrator | Thursday 05 March 2026 01:00:41 +0000 (0:00:11.671) 0:01:22.703 ********
2026-03-05 01:02:22.722665 | orchestrator |
2026-03-05 01:02:22.722671 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-03-05 01:02:22.722682 | orchestrator | Thursday 05 March 2026 01:00:41 +0000 (0:00:00.084) 0:01:22.787 ********
2026-03-05 01:02:22.722688 | orchestrator |
2026-03-05 01:02:22.722694 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-03-05 01:02:22.722700 | orchestrator | Thursday 05 March 2026 01:00:41 +0000 (0:00:00.087) 0:01:22.947 ********
2026-03-05 01:02:22.722705 | orchestrator |
2026-03-05 01:02:22.722711 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ********************
2026-03-05 01:02:22.722717 | orchestrator | Thursday 05 March 2026 01:00:41 +0000 (0:00:00.087) 0:01:22.947 ********
2026-03-05 01:02:22.722722 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:02:22.722728 | orchestrator | changed: [testbed-node-2]
2026-03-05 01:02:22.722734 | orchestrator | changed: [testbed-node-1]
2026-03-05 01:02:22.722740 | orchestrator |
2026-03-05 01:02:22.722746 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] *****************
2026-03-05 01:02:22.722752 | orchestrator | Thursday 05 March 2026 01:01:10 +0000 (0:00:28.699) 0:01:51.647 ********
2026-03-05 01:02:22.722758 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:02:22.722764 | orchestrator | changed: [testbed-node-2]
2026-03-05 01:02:22.722769 | orchestrator | changed: [testbed-node-1]
2026-03-05 01:02:22.722775 | orchestrator |
2026-03-05 01:02:22.722781 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************
2026-03-05 01:02:22.722787 | orchestrator | Thursday 05 March 2026 01:01:15 +0000 (0:00:05.371) 0:01:57.019 ********
2026-03-05 01:02:22.722797 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:02:22.722803 | orchestrator | changed: [testbed-node-2]
2026-03-05 01:02:22.722833 | orchestrator | changed: [testbed-node-1]
2026-03-05 01:02:22.722842 | orchestrator |
2026-03-05 01:02:22.722848 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-03-05 01:02:22.722854 | orchestrator | Thursday 05 March 2026 01:01:27 +0000 (0:00:11.858) 0:02:08.878 ********
2026-03-05 01:02:22.722860 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 01:02:22.722865 | orchestrator |
2026-03-05 01:02:22.722871 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] ***********************
2026-03-05 01:02:22.722877 | orchestrator | Thursday 05 March 2026 01:01:28 +0000 (0:00:00.851) 0:02:09.729 ********
2026-03-05 01:02:22.722883 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:02:22.722889 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:02:22.722895 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:02:22.722900 | orchestrator |
2026-03-05 01:02:22.722906 | orchestrator | TASK [keystone : Run key distribution] *****************************************
2026-03-05 01:02:22.722917 | orchestrator | Thursday 05 March 2026 01:01:28 +0000 (0:00:00.830) 0:02:10.560 ********
2026-03-05 01:02:22.722923 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:02:22.722929 | orchestrator |
2026-03-05 01:02:22.722935 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] ****
2026-03-05 01:02:22.722940 | orchestrator | Thursday 05 March 2026 01:01:30 +0000 (0:00:01.870) 0:02:12.430 ********
2026-03-05 01:02:22.722946 | orchestrator | changed: [testbed-node-0] => (item=RegionOne)
2026-03-05 01:02:22.722952 | orchestrator |
2026-03-05 01:02:22.722958 | orchestrator | TASK [service-ks-register : keystone | Creating services] **********************
2026-03-05 01:02:22.722964 | orchestrator | Thursday 05 March 2026 01:01:43 +0000 (0:00:12.223) 0:02:24.654 ********
2026-03-05 01:02:22.722970 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity))
2026-03-05 01:02:22.722975 | orchestrator |
2026-03-05 01:02:22.722981 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] *********************
2026-03-05 01:02:22.722987 | orchestrator | Thursday 05 March 2026 01:02:06 +0000 (0:00:23.225) 0:02:47.880 ********
2026-03-05
01:02:22.722993 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2026-03-05 01:02:22.722999 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2026-03-05 01:02:22.723014 | orchestrator | 2026-03-05 01:02:22.723023 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2026-03-05 01:02:22.723032 | orchestrator | Thursday 05 March 2026 01:02:13 +0000 (0:00:06.888) 0:02:54.768 ******** 2026-03-05 01:02:22.723042 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:02:22.723048 | orchestrator | 2026-03-05 01:02:22.723054 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2026-03-05 01:02:22.723060 | orchestrator | Thursday 05 March 2026 01:02:13 +0000 (0:00:00.144) 0:02:54.913 ******** 2026-03-05 01:02:22.723066 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:02:22.723072 | orchestrator | 2026-03-05 01:02:22.723077 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2026-03-05 01:02:22.723083 | orchestrator | Thursday 05 March 2026 01:02:13 +0000 (0:00:00.185) 0:02:55.098 ******** 2026-03-05 01:02:22.723089 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:02:22.723095 | orchestrator | 2026-03-05 01:02:22.723105 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2026-03-05 01:02:22.723111 | orchestrator | Thursday 05 March 2026 01:02:13 +0000 (0:00:00.224) 0:02:55.323 ******** 2026-03-05 01:02:22.723117 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:02:22.723123 | orchestrator | 2026-03-05 01:02:22.723129 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2026-03-05 01:02:22.723135 | orchestrator | Thursday 05 March 2026 01:02:14 +0000 (0:00:00.653) 0:02:55.976 ******** 2026-03-05 01:02:22.723161 
| orchestrator | ok: [testbed-node-0] 2026-03-05 01:02:22.723167 | orchestrator | 2026-03-05 01:02:22.723173 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-05 01:02:22.723179 | orchestrator | Thursday 05 March 2026 01:02:18 +0000 (0:00:03.709) 0:02:59.685 ******** 2026-03-05 01:02:22.723185 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:02:22.723191 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:02:22.723196 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:02:22.723213 | orchestrator | 2026-03-05 01:02:22.723219 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-05 01:02:22.723226 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-05 01:02:22.723241 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-05 01:02:22.723248 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-05 01:02:22.723253 | orchestrator | 2026-03-05 01:02:22.723259 | orchestrator | 2026-03-05 01:02:22.723265 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-05 01:02:22.723271 | orchestrator | Thursday 05 March 2026 01:02:19 +0000 (0:00:01.279) 0:03:00.965 ******** 2026-03-05 01:02:22.723277 | orchestrator | =============================================================================== 2026-03-05 01:02:22.723282 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 28.70s 2026-03-05 01:02:22.723288 | orchestrator | service-ks-register : keystone | Creating services --------------------- 23.23s 2026-03-05 01:02:22.723294 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 16.85s 2026-03-05 01:02:22.723300 | orchestrator | keystone : 
Creating admin project, user, role, service, and endpoint --- 12.22s 2026-03-05 01:02:22.723311 | orchestrator | keystone : Restart keystone container ---------------------------------- 11.86s 2026-03-05 01:02:22.723317 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 11.67s 2026-03-05 01:02:22.723323 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.83s 2026-03-05 01:02:22.723328 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 6.89s 2026-03-05 01:02:22.723334 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 6.63s 2026-03-05 01:02:22.723347 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 5.37s 2026-03-05 01:02:22.723353 | orchestrator | keystone : Creating default user role ----------------------------------- 3.71s 2026-03-05 01:02:22.723359 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.64s 2026-03-05 01:02:22.723365 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.51s 2026-03-05 01:02:22.723377 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 3.10s 2026-03-05 01:02:22.723383 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.61s 2026-03-05 01:02:22.723390 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.51s 2026-03-05 01:02:22.723395 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.45s 2026-03-05 01:02:22.723401 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.97s 2026-03-05 01:02:22.723407 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.87s 2026-03-05 01:02:22.723413 | orchestrator | keystone : Copying over 
wsgi-keystone.conf ------------------------------ 1.83s 2026-03-05 01:02:22.723418 | orchestrator | 2026-03-05 01:02:22 | INFO  | Task 4a46188d-de68-4ab4-9d80-df77858d649e is in state STARTED 2026-03-05 01:02:22.723424 | orchestrator | 2026-03-05 01:02:22 | INFO  | Task 30a2ba67-0000-47e3-ac34-ad13cde36007 is in state STARTED 2026-03-05 01:02:22.723430 | orchestrator | 2026-03-05 01:02:22 | INFO  | Task 16d4b45c-4b97-49a0-8d20-b92fdb2e517b is in state STARTED 2026-03-05 01:02:22.723436 | orchestrator | 2026-03-05 01:02:22 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:02:25.768411 | orchestrator | 2026-03-05 01:02:25 | INFO  | Task dfdea0cb-5efb-4dbc-965b-da02d7a2fb6f is in state STARTED 2026-03-05 01:02:25.769542 | orchestrator | 2026-03-05 01:02:25 | INFO  | Task 67b1ea4f-ef56-47e6-94d9-1d9931fcb65d is in state STARTED 2026-03-05 01:02:25.770341 | orchestrator | 2026-03-05 01:02:25 | INFO  | Task 4a46188d-de68-4ab4-9d80-df77858d649e is in state STARTED 2026-03-05 01:02:25.771476 | orchestrator | 2026-03-05 01:02:25 | INFO  | Task 30a2ba67-0000-47e3-ac34-ad13cde36007 is in state STARTED 2026-03-05 01:02:25.773999 | orchestrator | 2026-03-05 01:02:25 | INFO  | Task 16d4b45c-4b97-49a0-8d20-b92fdb2e517b is in state STARTED 2026-03-05 01:02:25.774106 | orchestrator | 2026-03-05 01:02:25 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:02:28.826369 | orchestrator | 2026-03-05 01:02:28 | INFO  | Task dfdea0cb-5efb-4dbc-965b-da02d7a2fb6f is in state STARTED 2026-03-05 01:02:28.827889 | orchestrator | 2026-03-05 01:02:28 | INFO  | Task 67b1ea4f-ef56-47e6-94d9-1d9931fcb65d is in state STARTED 2026-03-05 01:02:28.830889 | orchestrator | 2026-03-05 01:02:28 | INFO  | Task 4a46188d-de68-4ab4-9d80-df77858d649e is in state STARTED 2026-03-05 01:02:28.832638 | orchestrator | 2026-03-05 01:02:28 | INFO  | Task 30a2ba67-0000-47e3-ac34-ad13cde36007 is in state STARTED 2026-03-05 01:02:28.833586 | orchestrator | 2026-03-05 01:02:28 | INFO  | Task 
16d4b45c-4b97-49a0-8d20-b92fdb2e517b is in state STARTED 2026-03-05 01:02:28.833629 | orchestrator | 2026-03-05 01:02:28 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:02:31.868581 | orchestrator | 2026-03-05 01:02:31 | INFO  | Task dfdea0cb-5efb-4dbc-965b-da02d7a2fb6f is in state STARTED 2026-03-05 01:02:31.872393 | orchestrator | 2026-03-05 01:02:31 | INFO  | Task 67b1ea4f-ef56-47e6-94d9-1d9931fcb65d is in state STARTED 2026-03-05 01:02:31.873876 | orchestrator | 2026-03-05 01:02:31 | INFO  | Task 4a46188d-de68-4ab4-9d80-df77858d649e is in state STARTED 2026-03-05 01:02:31.874554 | orchestrator | 2026-03-05 01:02:31 | INFO  | Task 30a2ba67-0000-47e3-ac34-ad13cde36007 is in state STARTED 2026-03-05 01:02:31.877242 | orchestrator | 2026-03-05 01:02:31 | INFO  | Task 16d4b45c-4b97-49a0-8d20-b92fdb2e517b is in state STARTED 2026-03-05 01:02:31.877285 | orchestrator | 2026-03-05 01:02:31 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:02:34.916872 | orchestrator | 2026-03-05 01:02:34 | INFO  | Task dfdea0cb-5efb-4dbc-965b-da02d7a2fb6f is in state STARTED 2026-03-05 01:02:34.918213 | orchestrator | 2026-03-05 01:02:34 | INFO  | Task 67b1ea4f-ef56-47e6-94d9-1d9931fcb65d is in state STARTED 2026-03-05 01:02:34.919461 | orchestrator | 2026-03-05 01:02:34 | INFO  | Task 4a46188d-de68-4ab4-9d80-df77858d649e is in state STARTED 2026-03-05 01:02:34.920429 | orchestrator | 2026-03-05 01:02:34 | INFO  | Task 30a2ba67-0000-47e3-ac34-ad13cde36007 is in state STARTED 2026-03-05 01:02:34.921580 | orchestrator | 2026-03-05 01:02:34 | INFO  | Task 16d4b45c-4b97-49a0-8d20-b92fdb2e517b is in state STARTED 2026-03-05 01:02:34.921639 | orchestrator | 2026-03-05 01:02:34 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:02:37.976300 | orchestrator | 2026-03-05 01:02:37 | INFO  | Task dfdea0cb-5efb-4dbc-965b-da02d7a2fb6f is in state STARTED 2026-03-05 01:02:37.977757 | orchestrator | 2026-03-05 01:02:37 | INFO  | Task 
67b1ea4f-ef56-47e6-94d9-1d9931fcb65d is in state STARTED 2026-03-05 01:02:37.981325 | orchestrator | 2026-03-05 01:02:37 | INFO  | Task 4a46188d-de68-4ab4-9d80-df77858d649e is in state STARTED 2026-03-05 01:02:37.981456 | orchestrator | 2026-03-05 01:02:37 | INFO  | Task 30a2ba67-0000-47e3-ac34-ad13cde36007 is in state STARTED 2026-03-05 01:02:37.984081 | orchestrator | 2026-03-05 01:02:37 | INFO  | Task 16d4b45c-4b97-49a0-8d20-b92fdb2e517b is in state STARTED 2026-03-05 01:02:37.984157 | orchestrator | 2026-03-05 01:02:37 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:02:41.024400 | orchestrator | 2026-03-05 01:02:41 | INFO  | Task dfdea0cb-5efb-4dbc-965b-da02d7a2fb6f is in state STARTED 2026-03-05 01:02:41.025425 | orchestrator | 2026-03-05 01:02:41 | INFO  | Task 67b1ea4f-ef56-47e6-94d9-1d9931fcb65d is in state STARTED 2026-03-05 01:02:41.026282 | orchestrator | 2026-03-05 01:02:41 | INFO  | Task 4a46188d-de68-4ab4-9d80-df77858d649e is in state STARTED 2026-03-05 01:02:41.027352 | orchestrator | 2026-03-05 01:02:41 | INFO  | Task 30a2ba67-0000-47e3-ac34-ad13cde36007 is in state STARTED 2026-03-05 01:02:41.029553 | orchestrator | 2026-03-05 01:02:41 | INFO  | Task 16d4b45c-4b97-49a0-8d20-b92fdb2e517b is in state STARTED 2026-03-05 01:02:41.029602 | orchestrator | 2026-03-05 01:02:41 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:02:44.072628 | orchestrator | 2026-03-05 01:02:44 | INFO  | Task dfdea0cb-5efb-4dbc-965b-da02d7a2fb6f is in state STARTED 2026-03-05 01:02:44.075346 | orchestrator | 2026-03-05 01:02:44 | INFO  | Task 67b1ea4f-ef56-47e6-94d9-1d9931fcb65d is in state STARTED 2026-03-05 01:02:44.076659 | orchestrator | 2026-03-05 01:02:44 | INFO  | Task 4a46188d-de68-4ab4-9d80-df77858d649e is in state STARTED 2026-03-05 01:02:44.077924 | orchestrator | 2026-03-05 01:02:44 | INFO  | Task 30a2ba67-0000-47e3-ac34-ad13cde36007 is in state STARTED 2026-03-05 01:02:44.079897 | orchestrator | 2026-03-05 01:02:44 | INFO  | Task 
16d4b45c-4b97-49a0-8d20-b92fdb2e517b is in state STARTED 2026-03-05 01:02:44.080048 | orchestrator | 2026-03-05 01:02:44 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:02:47.126718 | orchestrator | 2026-03-05 01:02:47 | INFO  | Task dfdea0cb-5efb-4dbc-965b-da02d7a2fb6f is in state STARTED 2026-03-05 01:02:47.128827 | orchestrator | 2026-03-05 01:02:47 | INFO  | Task 67b1ea4f-ef56-47e6-94d9-1d9931fcb65d is in state STARTED 2026-03-05 01:02:47.131448 | orchestrator | 2026-03-05 01:02:47 | INFO  | Task 4a46188d-de68-4ab4-9d80-df77858d649e is in state STARTED 2026-03-05 01:02:47.134602 | orchestrator | 2026-03-05 01:02:47 | INFO  | Task 30a2ba67-0000-47e3-ac34-ad13cde36007 is in state STARTED 2026-03-05 01:02:47.136091 | orchestrator | 2026-03-05 01:02:47 | INFO  | Task 16d4b45c-4b97-49a0-8d20-b92fdb2e517b is in state STARTED 2026-03-05 01:02:47.136136 | orchestrator | 2026-03-05 01:02:47 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:02:50.233295 | orchestrator | 2026-03-05 01:02:50 | INFO  | Task dfdea0cb-5efb-4dbc-965b-da02d7a2fb6f is in state STARTED 2026-03-05 01:02:50.234889 | orchestrator | 2026-03-05 01:02:50 | INFO  | Task 67b1ea4f-ef56-47e6-94d9-1d9931fcb65d is in state STARTED 2026-03-05 01:02:50.237157 | orchestrator | 2026-03-05 01:02:50 | INFO  | Task 4a46188d-de68-4ab4-9d80-df77858d649e is in state STARTED 2026-03-05 01:02:50.239185 | orchestrator | 2026-03-05 01:02:50 | INFO  | Task 30a2ba67-0000-47e3-ac34-ad13cde36007 is in state STARTED 2026-03-05 01:02:50.242496 | orchestrator | 2026-03-05 01:02:50 | INFO  | Task 16d4b45c-4b97-49a0-8d20-b92fdb2e517b is in state STARTED 2026-03-05 01:02:50.243038 | orchestrator | 2026-03-05 01:02:50 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:02:53.292857 | orchestrator | 2026-03-05 01:02:53 | INFO  | Task dfdea0cb-5efb-4dbc-965b-da02d7a2fb6f is in state STARTED 2026-03-05 01:02:53.293986 | orchestrator | 2026-03-05 01:02:53 | INFO  | Task 
67b1ea4f-ef56-47e6-94d9-1d9931fcb65d is in state STARTED 2026-03-05 01:02:53.294763 | orchestrator | 2026-03-05 01:02:53 | INFO  | Task 4a46188d-de68-4ab4-9d80-df77858d649e is in state STARTED 2026-03-05 01:02:53.295996 | orchestrator | 2026-03-05 01:02:53 | INFO  | Task 30a2ba67-0000-47e3-ac34-ad13cde36007 is in state STARTED 2026-03-05 01:02:53.297458 | orchestrator | 2026-03-05 01:02:53 | INFO  | Task 16d4b45c-4b97-49a0-8d20-b92fdb2e517b is in state STARTED 2026-03-05 01:02:53.297506 | orchestrator | 2026-03-05 01:02:53 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:02:56.766683 | orchestrator | 2026-03-05 01:02:56 | INFO  | Task dfdea0cb-5efb-4dbc-965b-da02d7a2fb6f is in state STARTED 2026-03-05 01:02:56.766772 | orchestrator | 2026-03-05 01:02:56 | INFO  | Task 67b1ea4f-ef56-47e6-94d9-1d9931fcb65d is in state STARTED 2026-03-05 01:02:56.766808 | orchestrator | 2026-03-05 01:02:56 | INFO  | Task 4a46188d-de68-4ab4-9d80-df77858d649e is in state STARTED 2026-03-05 01:02:56.766822 | orchestrator | 2026-03-05 01:02:56 | INFO  | Task 30a2ba67-0000-47e3-ac34-ad13cde36007 is in state STARTED 2026-03-05 01:02:56.766836 | orchestrator | 2026-03-05 01:02:56 | INFO  | Task 16d4b45c-4b97-49a0-8d20-b92fdb2e517b is in state STARTED 2026-03-05 01:02:56.766850 | orchestrator | 2026-03-05 01:02:56 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:03:00.260036 | orchestrator | 2026-03-05 01:02:59 | INFO  | Task dfdea0cb-5efb-4dbc-965b-da02d7a2fb6f is in state STARTED 2026-03-05 01:03:00.260118 | orchestrator | 2026-03-05 01:02:59 | INFO  | Task 67b1ea4f-ef56-47e6-94d9-1d9931fcb65d is in state STARTED 2026-03-05 01:03:00.260126 | orchestrator | 2026-03-05 01:02:59 | INFO  | Task 4a46188d-de68-4ab4-9d80-df77858d649e is in state STARTED 2026-03-05 01:03:00.260132 | orchestrator | 2026-03-05 01:02:59 | INFO  | Task 30a2ba67-0000-47e3-ac34-ad13cde36007 is in state STARTED 2026-03-05 01:03:00.260159 | orchestrator | 2026-03-05 01:02:59 | INFO  | Task 
16d4b45c-4b97-49a0-8d20-b92fdb2e517b is in state STARTED 2026-03-05 01:03:00.260166 | orchestrator | 2026-03-05 01:02:59 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:03:02.789116 | orchestrator | 2026-03-05 01:03:02 | INFO  | Task dfdea0cb-5efb-4dbc-965b-da02d7a2fb6f is in state STARTED 2026-03-05 01:03:02.790266 | orchestrator | 2026-03-05 01:03:02 | INFO  | Task 67b1ea4f-ef56-47e6-94d9-1d9931fcb65d is in state STARTED 2026-03-05 01:03:02.790820 | orchestrator | 2026-03-05 01:03:02 | INFO  | Task 4a46188d-de68-4ab4-9d80-df77858d649e is in state STARTED 2026-03-05 01:03:02.791760 | orchestrator | 2026-03-05 01:03:02 | INFO  | Task 30a2ba67-0000-47e3-ac34-ad13cde36007 is in state STARTED 2026-03-05 01:03:02.792711 | orchestrator | 2026-03-05 01:03:02 | INFO  | Task 16d4b45c-4b97-49a0-8d20-b92fdb2e517b is in state STARTED 2026-03-05 01:03:02.792782 | orchestrator | 2026-03-05 01:03:02 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:03:05.871023 | orchestrator | 2026-03-05 01:03:05 | INFO  | Task dfdea0cb-5efb-4dbc-965b-da02d7a2fb6f is in state STARTED 2026-03-05 01:03:05.871121 | orchestrator | 2026-03-05 01:03:05 | INFO  | Task 67b1ea4f-ef56-47e6-94d9-1d9931fcb65d is in state STARTED 2026-03-05 01:03:05.871131 | orchestrator | 2026-03-05 01:03:05 | INFO  | Task 4a46188d-de68-4ab4-9d80-df77858d649e is in state STARTED 2026-03-05 01:03:05.871206 | orchestrator | 2026-03-05 01:03:05 | INFO  | Task 30a2ba67-0000-47e3-ac34-ad13cde36007 is in state STARTED 2026-03-05 01:03:05.871217 | orchestrator | 2026-03-05 01:03:05 | INFO  | Task 16d4b45c-4b97-49a0-8d20-b92fdb2e517b is in state STARTED 2026-03-05 01:03:05.871229 | orchestrator | 2026-03-05 01:03:05 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:03:08.886565 | orchestrator | 2026-03-05 01:03:08 | INFO  | Task dfdea0cb-5efb-4dbc-965b-da02d7a2fb6f is in state STARTED 2026-03-05 01:03:08.889799 | orchestrator | 2026-03-05 01:03:08 | INFO  | Task 
67b1ea4f-ef56-47e6-94d9-1d9931fcb65d is in state STARTED 2026-03-05 01:03:08.891075 | orchestrator | 2026-03-05 01:03:08 | INFO  | Task 4a46188d-de68-4ab4-9d80-df77858d649e is in state STARTED 2026-03-05 01:03:08.893566 | orchestrator | 2026-03-05 01:03:08 | INFO  | Task 30a2ba67-0000-47e3-ac34-ad13cde36007 is in state STARTED 2026-03-05 01:03:08.895131 | orchestrator | 2026-03-05 01:03:08 | INFO  | Task 16d4b45c-4b97-49a0-8d20-b92fdb2e517b is in state STARTED 2026-03-05 01:03:08.895191 | orchestrator | 2026-03-05 01:03:08 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:03:11.931110 | orchestrator | 2026-03-05 01:03:11 | INFO  | Task dfdea0cb-5efb-4dbc-965b-da02d7a2fb6f is in state STARTED 2026-03-05 01:03:11.932255 | orchestrator | 2026-03-05 01:03:11 | INFO  | Task 67b1ea4f-ef56-47e6-94d9-1d9931fcb65d is in state STARTED 2026-03-05 01:03:11.933259 | orchestrator | 2026-03-05 01:03:11 | INFO  | Task 4a46188d-de68-4ab4-9d80-df77858d649e is in state STARTED 2026-03-05 01:03:11.934438 | orchestrator | 2026-03-05 01:03:11 | INFO  | Task 30a2ba67-0000-47e3-ac34-ad13cde36007 is in state STARTED 2026-03-05 01:03:11.935259 | orchestrator | 2026-03-05 01:03:11 | INFO  | Task 16d4b45c-4b97-49a0-8d20-b92fdb2e517b is in state STARTED 2026-03-05 01:03:11.935337 | orchestrator | 2026-03-05 01:03:11 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:03:14.988736 | orchestrator | 2026-03-05 01:03:14 | INFO  | Task dfdea0cb-5efb-4dbc-965b-da02d7a2fb6f is in state SUCCESS 2026-03-05 01:03:14.989489 | orchestrator | 2026-03-05 01:03:14 | INFO  | Task 67b1ea4f-ef56-47e6-94d9-1d9931fcb65d is in state STARTED 2026-03-05 01:03:14.991807 | orchestrator | 2026-03-05 01:03:14 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED 2026-03-05 01:03:14.994770 | orchestrator | 2026-03-05 01:03:14 | INFO  | Task 4a46188d-de68-4ab4-9d80-df77858d649e is in state STARTED 2026-03-05 01:03:14.996050 | orchestrator | 2026-03-05 01:03:14 | INFO  | Task 
30a2ba67-0000-47e3-ac34-ad13cde36007 is in state STARTED 2026-03-05 01:03:14.998532 | orchestrator | 2026-03-05 01:03:14 | INFO  | Task 16d4b45c-4b97-49a0-8d20-b92fdb2e517b is in state STARTED 2026-03-05 01:03:14.998594 | orchestrator | 2026-03-05 01:03:14 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:03:18.047194 | orchestrator | 2026-03-05 01:03:18 | INFO  | Task 67b1ea4f-ef56-47e6-94d9-1d9931fcb65d is in state STARTED 2026-03-05 01:03:18.047291 | orchestrator | 2026-03-05 01:03:18 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED 2026-03-05 01:03:18.048464 | orchestrator | 2026-03-05 01:03:18 | INFO  | Task 4a46188d-de68-4ab4-9d80-df77858d649e is in state STARTED 2026-03-05 01:03:18.050539 | orchestrator | 2026-03-05 01:03:18 | INFO  | Task 30a2ba67-0000-47e3-ac34-ad13cde36007 is in state STARTED 2026-03-05 01:03:18.052178 | orchestrator | 2026-03-05 01:03:18 | INFO  | Task 16d4b45c-4b97-49a0-8d20-b92fdb2e517b is in state STARTED 2026-03-05 01:03:18.052226 | orchestrator | 2026-03-05 01:03:18 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:03:21.092781 | orchestrator | 2026-03-05 01:03:21 | INFO  | Task 67b1ea4f-ef56-47e6-94d9-1d9931fcb65d is in state STARTED 2026-03-05 01:03:21.094413 | orchestrator | 2026-03-05 01:03:21 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED 2026-03-05 01:03:21.096395 | orchestrator | 2026-03-05 01:03:21 | INFO  | Task 4a46188d-de68-4ab4-9d80-df77858d649e is in state STARTED 2026-03-05 01:03:21.097906 | orchestrator | 2026-03-05 01:03:21 | INFO  | Task 30a2ba67-0000-47e3-ac34-ad13cde36007 is in state STARTED 2026-03-05 01:03:21.099808 | orchestrator | 2026-03-05 01:03:21 | INFO  | Task 16d4b45c-4b97-49a0-8d20-b92fdb2e517b is in state STARTED 2026-03-05 01:03:21.099865 | orchestrator | 2026-03-05 01:03:21 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:03:24.235093 | orchestrator | 2026-03-05 01:03:24 | INFO  | Task 
67b1ea4f-ef56-47e6-94d9-1d9931fcb65d is in state STARTED 2026-03-05 01:03:24.235242 | orchestrator | 2026-03-05 01:03:24 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED 2026-03-05 01:03:24.235255 | orchestrator | 2026-03-05 01:03:24 | INFO  | Task 4a46188d-de68-4ab4-9d80-df77858d649e is in state STARTED 2026-03-05 01:03:24.235263 | orchestrator | 2026-03-05 01:03:24 | INFO  | Task 30a2ba67-0000-47e3-ac34-ad13cde36007 is in state STARTED 2026-03-05 01:03:24.235270 | orchestrator | 2026-03-05 01:03:24 | INFO  | Task 16d4b45c-4b97-49a0-8d20-b92fdb2e517b is in state STARTED 2026-03-05 01:03:24.235278 | orchestrator | 2026-03-05 01:03:24 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:03:27.200759 | orchestrator | 2026-03-05 01:03:27 | INFO  | Task 67b1ea4f-ef56-47e6-94d9-1d9931fcb65d is in state STARTED 2026-03-05 01:03:27.200853 | orchestrator | 2026-03-05 01:03:27 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED 2026-03-05 01:03:27.201424 | orchestrator | 2026-03-05 01:03:27 | INFO  | Task 4a46188d-de68-4ab4-9d80-df77858d649e is in state STARTED 2026-03-05 01:03:27.203397 | orchestrator | 2026-03-05 01:03:27 | INFO  | Task 30a2ba67-0000-47e3-ac34-ad13cde36007 is in state STARTED 2026-03-05 01:03:27.205511 | orchestrator | 2026-03-05 01:03:27 | INFO  | Task 16d4b45c-4b97-49a0-8d20-b92fdb2e517b is in state STARTED 2026-03-05 01:03:27.205559 | orchestrator | 2026-03-05 01:03:27 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:03:30.250300 | orchestrator | 2026-03-05 01:03:30 | INFO  | Task 67b1ea4f-ef56-47e6-94d9-1d9931fcb65d is in state STARTED 2026-03-05 01:03:30.253292 | orchestrator | 2026-03-05 01:03:30 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED 2026-03-05 01:03:30.258922 | orchestrator | 2026-03-05 01:03:30 | INFO  | Task 4a46188d-de68-4ab4-9d80-df77858d649e is in state STARTED 2026-03-05 01:03:30.261297 | orchestrator | 2026-03-05 01:03:30 | INFO  | Task 
30a2ba67-0000-47e3-ac34-ad13cde36007 is in state STARTED 2026-03-05 01:03:30.265924 | orchestrator | 2026-03-05 01:03:30 | INFO  | Task 16d4b45c-4b97-49a0-8d20-b92fdb2e517b is in state STARTED 2026-03-05 01:03:30.266067 | orchestrator | 2026-03-05 01:03:30 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:03:33.321682 | orchestrator | 2026-03-05 01:03:33 | INFO  | Task 67b1ea4f-ef56-47e6-94d9-1d9931fcb65d is in state STARTED 2026-03-05 01:03:33.322489 | orchestrator | 2026-03-05 01:03:33 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED 2026-03-05 01:03:33.325098 | orchestrator | 2026-03-05 01:03:33 | INFO  | Task 4a46188d-de68-4ab4-9d80-df77858d649e is in state STARTED 2026-03-05 01:03:33.326884 | orchestrator | 2026-03-05 01:03:33 | INFO  | Task 30a2ba67-0000-47e3-ac34-ad13cde36007 is in state STARTED 2026-03-05 01:03:33.328097 | orchestrator | 2026-03-05 01:03:33 | INFO  | Task 16d4b45c-4b97-49a0-8d20-b92fdb2e517b is in state STARTED 2026-03-05 01:03:33.328564 | orchestrator | 2026-03-05 01:03:33 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:03:36.377538 | orchestrator | 2026-03-05 01:03:36 | INFO  | Task 67b1ea4f-ef56-47e6-94d9-1d9931fcb65d is in state STARTED 2026-03-05 01:03:36.386970 | orchestrator | 2026-03-05 01:03:36 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED 2026-03-05 01:03:36.394471 | orchestrator | 2026-03-05 01:03:36 | INFO  | Task 4a46188d-de68-4ab4-9d80-df77858d649e is in state STARTED 2026-03-05 01:03:36.395551 | orchestrator | 2026-03-05 01:03:36 | INFO  | Task 30a2ba67-0000-47e3-ac34-ad13cde36007 is in state STARTED 2026-03-05 01:03:36.396330 | orchestrator | 2026-03-05 01:03:36 | INFO  | Task 16d4b45c-4b97-49a0-8d20-b92fdb2e517b is in state SUCCESS 2026-03-05 01:03:36.396656 | orchestrator | 2026-03-05 01:03:36 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:03:36.396928 | orchestrator | 2026-03-05 01:03:36.396949 | orchestrator | 2026-03-05 
01:03:36.396957 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-05 01:03:36.396966 | orchestrator | 2026-03-05 01:03:36.396974 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-05 01:03:36.396983 | orchestrator | Thursday 05 March 2026 01:02:22 +0000 (0:00:00.465) 0:00:00.465 ******** 2026-03-05 01:03:36.396991 | orchestrator | ok: [testbed-manager] 2026-03-05 01:03:36.396999 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:03:36.397008 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:03:36.397017 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:03:36.397025 | orchestrator | ok: [testbed-node-3] 2026-03-05 01:03:36.397033 | orchestrator | ok: [testbed-node-4] 2026-03-05 01:03:36.397041 | orchestrator | ok: [testbed-node-5] 2026-03-05 01:03:36.397049 | orchestrator | 2026-03-05 01:03:36.397057 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-05 01:03:36.397065 | orchestrator | Thursday 05 March 2026 01:02:24 +0000 (0:00:01.325) 0:00:01.791 ******** 2026-03-05 01:03:36.397073 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2026-03-05 01:03:36.397081 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2026-03-05 01:03:36.397090 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2026-03-05 01:03:36.397097 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2026-03-05 01:03:36.397106 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2026-03-05 01:03:36.397120 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2026-03-05 01:03:36.397156 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2026-03-05 01:03:36.397203 | orchestrator | 2026-03-05 01:03:36.397221 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-03-05 
01:03:36.397236 | orchestrator | 2026-03-05 01:03:36.397249 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2026-03-05 01:03:36.397263 | orchestrator | Thursday 05 March 2026 01:02:27 +0000 (0:00:02.839) 0:00:04.631 ******** 2026-03-05 01:03:36.397278 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-05 01:03:36.397293 | orchestrator | 2026-03-05 01:03:36.397306 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2026-03-05 01:03:36.397319 | orchestrator | Thursday 05 March 2026 01:02:29 +0000 (0:00:02.571) 0:00:07.203 ******** 2026-03-05 01:03:36.397333 | orchestrator | changed: [testbed-manager] => (item=swift (object-store)) 2026-03-05 01:03:36.397348 | orchestrator | 2026-03-05 01:03:36.397363 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2026-03-05 01:03:36.397377 | orchestrator | Thursday 05 March 2026 01:02:33 +0000 (0:00:04.252) 0:00:11.455 ******** 2026-03-05 01:03:36.397392 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2026-03-05 01:03:36.397409 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2026-03-05 01:03:36.397422 | orchestrator | 2026-03-05 01:03:36.397436 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2026-03-05 01:03:36.397450 | orchestrator | Thursday 05 March 2026 01:02:42 +0000 (0:00:08.676) 0:00:20.131 ******** 2026-03-05 01:03:36.397464 | orchestrator | ok: [testbed-manager] => (item=service) 2026-03-05 01:03:36.397479 | orchestrator | 2026-03-05 01:03:36.397487 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating 
users] ************************* 2026-03-05 01:03:36.397495 | orchestrator | Thursday 05 March 2026 01:02:46 +0000 (0:00:03.880) 0:00:24.012 ******** 2026-03-05 01:03:36.397503 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service) 2026-03-05 01:03:36.397511 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-05 01:03:36.397519 | orchestrator | 2026-03-05 01:03:36.397542 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2026-03-05 01:03:36.397550 | orchestrator | Thursday 05 March 2026 01:02:52 +0000 (0:00:05.950) 0:00:29.963 ******** 2026-03-05 01:03:36.397561 | orchestrator | ok: [testbed-manager] => (item=admin) 2026-03-05 01:03:36.397571 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin) 2026-03-05 01:03:36.397580 | orchestrator | 2026-03-05 01:03:36.397589 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2026-03-05 01:03:36.397599 | orchestrator | Thursday 05 March 2026 01:03:05 +0000 (0:00:12.898) 0:00:42.862 ******** 2026-03-05 01:03:36.397608 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin) 2026-03-05 01:03:36.397617 | orchestrator | 2026-03-05 01:03:36.397627 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-05 01:03:36.397636 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 01:03:36.397646 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 01:03:36.397656 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 01:03:36.397666 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 01:03:36.397675 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 
failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 01:03:36.397706 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 01:03:36.397717 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 01:03:36.397726 | orchestrator | 2026-03-05 01:03:36.397734 | orchestrator | 2026-03-05 01:03:36.397742 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-05 01:03:36.397750 | orchestrator | Thursday 05 March 2026 01:03:12 +0000 (0:00:07.717) 0:00:50.579 ******** 2026-03-05 01:03:36.397758 | orchestrator | =============================================================================== 2026-03-05 01:03:36.397766 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------ 12.90s 2026-03-05 01:03:36.397773 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 8.68s 2026-03-05 01:03:36.397781 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 7.72s 2026-03-05 01:03:36.397789 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 5.95s 2026-03-05 01:03:36.397797 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 4.25s 2026-03-05 01:03:36.397804 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.88s 2026-03-05 01:03:36.397812 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.84s 2026-03-05 01:03:36.397820 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 2.57s 2026-03-05 01:03:36.397828 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.33s 2026-03-05 01:03:36.397835 | orchestrator | 2026-03-05 01:03:36.397843 | orchestrator | [WARNING]: Collection community.general 
does not support Ansible version 2026-03-05 01:03:36.397851 | orchestrator | 2.16.14 2026-03-05 01:03:36.397859 | orchestrator | 2026-03-05 01:03:36.397867 | orchestrator | PLAY [Bootstrap ceph dashboard] *********************************************** 2026-03-05 01:03:36.397875 | orchestrator | 2026-03-05 01:03:36.397883 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2026-03-05 01:03:36.397891 | orchestrator | Thursday 05 March 2026 01:02:12 +0000 (0:00:00.316) 0:00:00.316 ******** 2026-03-05 01:03:36.397899 | orchestrator | changed: [testbed-manager] 2026-03-05 01:03:36.397907 | orchestrator | 2026-03-05 01:03:36.397915 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2026-03-05 01:03:36.397922 | orchestrator | Thursday 05 March 2026 01:02:13 +0000 (0:00:01.570) 0:00:01.887 ******** 2026-03-05 01:03:36.397930 | orchestrator | changed: [testbed-manager] 2026-03-05 01:03:36.397938 | orchestrator | 2026-03-05 01:03:36.397946 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2026-03-05 01:03:36.397954 | orchestrator | Thursday 05 March 2026 01:02:14 +0000 (0:00:01.179) 0:00:03.066 ******** 2026-03-05 01:03:36.397962 | orchestrator | changed: [testbed-manager] 2026-03-05 01:03:36.397970 | orchestrator | 2026-03-05 01:03:36.397978 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2026-03-05 01:03:36.397986 | orchestrator | Thursday 05 March 2026 01:02:16 +0000 (0:00:01.277) 0:00:04.343 ******** 2026-03-05 01:03:36.397993 | orchestrator | changed: [testbed-manager] 2026-03-05 01:03:36.398001 | orchestrator | 2026-03-05 01:03:36.398009 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2026-03-05 01:03:36.398061 | orchestrator | Thursday 05 March 2026 01:02:17 +0000 (0:00:01.406) 0:00:05.750 ******** 2026-03-05 01:03:36.398072 |
orchestrator | changed: [testbed-manager] 2026-03-05 01:03:36.398080 | orchestrator | 2026-03-05 01:03:36.398088 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2026-03-05 01:03:36.398096 | orchestrator | Thursday 05 March 2026 01:02:18 +0000 (0:00:01.413) 0:00:07.163 ******** 2026-03-05 01:03:36.398103 | orchestrator | changed: [testbed-manager] 2026-03-05 01:03:36.398111 | orchestrator | 2026-03-05 01:03:36.398119 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2026-03-05 01:03:36.398127 | orchestrator | Thursday 05 March 2026 01:02:20 +0000 (0:00:01.602) 0:00:08.766 ******** 2026-03-05 01:03:36.398163 | orchestrator | changed: [testbed-manager] 2026-03-05 01:03:36.398171 | orchestrator | 2026-03-05 01:03:36.398179 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2026-03-05 01:03:36.398187 | orchestrator | Thursday 05 March 2026 01:02:22 +0000 (0:00:02.023) 0:00:10.790 ******** 2026-03-05 01:03:36.398195 | orchestrator | changed: [testbed-manager] 2026-03-05 01:03:36.398202 | orchestrator | 2026-03-05 01:03:36.398210 | orchestrator | TASK [Create admin user] ******************************************************* 2026-03-05 01:03:36.398218 | orchestrator | Thursday 05 March 2026 01:02:24 +0000 (0:00:01.547) 0:00:12.337 ******** 2026-03-05 01:03:36.398226 | orchestrator | changed: [testbed-manager] 2026-03-05 01:03:36.398234 | orchestrator | 2026-03-05 01:03:36.398242 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2026-03-05 01:03:36.398250 | orchestrator | Thursday 05 March 2026 01:03:09 +0000 (0:00:45.163) 0:00:57.501 ******** 2026-03-05 01:03:36.398258 | orchestrator | skipping: [testbed-manager] 2026-03-05 01:03:36.398265 | orchestrator | 2026-03-05 01:03:36.398273 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 
2026-03-05 01:03:36.398281 | orchestrator | 2026-03-05 01:03:36.398289 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-03-05 01:03:36.398297 | orchestrator | Thursday 05 March 2026 01:03:09 +0000 (0:00:00.206) 0:00:57.708 ******** 2026-03-05 01:03:36.398305 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:03:36.398312 | orchestrator | 2026-03-05 01:03:36.398320 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-03-05 01:03:36.398328 | orchestrator | 2026-03-05 01:03:36.398336 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-03-05 01:03:36.398344 | orchestrator | Thursday 05 March 2026 01:03:21 +0000 (0:00:11.976) 0:01:09.684 ******** 2026-03-05 01:03:36.398352 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:03:36.398359 | orchestrator | 2026-03-05 01:03:36.398367 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-03-05 01:03:36.398375 | orchestrator | 2026-03-05 01:03:36.398383 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-03-05 01:03:36.398397 | orchestrator | Thursday 05 March 2026 01:03:32 +0000 (0:00:11.389) 0:01:21.073 ******** 2026-03-05 01:03:36.398406 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:03:36.398414 | orchestrator | 2026-03-05 01:03:36.398422 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-05 01:03:36.398430 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-05 01:03:36.398438 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 01:03:36.398446 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 01:03:36.398454 | 
orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 01:03:36.398462 | orchestrator | 2026-03-05 01:03:36.398470 | orchestrator | 2026-03-05 01:03:36.398478 | orchestrator | 2026-03-05 01:03:36.398486 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-05 01:03:36.398494 | orchestrator | Thursday 05 March 2026 01:03:34 +0000 (0:00:01.245) 0:01:22.319 ******** 2026-03-05 01:03:36.398502 | orchestrator | =============================================================================== 2026-03-05 01:03:36.398510 | orchestrator | Create admin user ------------------------------------------------------ 45.16s 2026-03-05 01:03:36.398518 | orchestrator | Restart ceph manager service ------------------------------------------- 24.61s 2026-03-05 01:03:36.398526 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.02s 2026-03-05 01:03:36.398540 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.60s 2026-03-05 01:03:36.398548 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.57s 2026-03-05 01:03:36.398556 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.55s 2026-03-05 01:03:36.398564 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.41s 2026-03-05 01:03:36.398572 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.41s 2026-03-05 01:03:36.398580 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.28s 2026-03-05 01:03:36.398587 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.18s 2026-03-05 01:03:36.398595 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.21s 2026-03-05 01:03:39.437353 | 
orchestrator | 2026-03-05 01:03:39 | INFO  | Task 67b1ea4f-ef56-47e6-94d9-1d9931fcb65d is in state STARTED 2026-03-05 01:03:39.438281 | orchestrator | 2026-03-05 01:03:39 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED 2026-03-05 01:03:39.438854 | orchestrator | 2026-03-05 01:03:39 | INFO  | Task 4a46188d-de68-4ab4-9d80-df77858d649e is in state STARTED 2026-03-05 01:03:39.439911 | orchestrator | 2026-03-05 01:03:39 | INFO  | Task 30a2ba67-0000-47e3-ac34-ad13cde36007 is in state STARTED 2026-03-05 01:03:39.439959 | orchestrator | 2026-03-05 01:03:39 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:05:47.810580 | orchestrator | 2026-03-05 01:05:47 | INFO  | Task 
67b1ea4f-ef56-47e6-94d9-1d9931fcb65d is in state STARTED 2026-03-05 01:05:47.811714 | orchestrator | 2026-03-05 01:05:47 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED 2026-03-05 01:05:47.813659 | orchestrator | 2026-03-05 01:05:47 | INFO  | Task 4a46188d-de68-4ab4-9d80-df77858d649e is in state STARTED 2026-03-05 01:05:47.814359 | orchestrator | 2026-03-05 01:05:47 | INFO  | Task 30a2ba67-0000-47e3-ac34-ad13cde36007 is in state STARTED 2026-03-05 01:05:47.814387 | orchestrator | 2026-03-05 01:05:47 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:05:50.853766 | orchestrator | 2026-03-05 01:05:50 | INFO  | Task 67b1ea4f-ef56-47e6-94d9-1d9931fcb65d is in state STARTED 2026-03-05 01:05:50.857417 | orchestrator | 2026-03-05 01:05:50 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED 2026-03-05 01:05:50.859673 | orchestrator | 2026-03-05 01:05:50 | INFO  | Task 4a46188d-de68-4ab4-9d80-df77858d649e is in state STARTED 2026-03-05 01:05:50.862129 | orchestrator | 2026-03-05 01:05:50 | INFO  | Task 30a2ba67-0000-47e3-ac34-ad13cde36007 is in state STARTED 2026-03-05 01:05:50.862202 | orchestrator | 2026-03-05 01:05:50 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:05:53.902499 | orchestrator | 2026-03-05 01:05:53 | INFO  | Task 67b1ea4f-ef56-47e6-94d9-1d9931fcb65d is in state STARTED 2026-03-05 01:05:53.902978 | orchestrator | 2026-03-05 01:05:53 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED 2026-03-05 01:05:53.903850 | orchestrator | 2026-03-05 01:05:53 | INFO  | Task 4a46188d-de68-4ab4-9d80-df77858d649e is in state STARTED 2026-03-05 01:05:53.904593 | orchestrator | 2026-03-05 01:05:53 | INFO  | Task 30a2ba67-0000-47e3-ac34-ad13cde36007 is in state STARTED 2026-03-05 01:05:53.904647 | orchestrator | 2026-03-05 01:05:53 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:05:56.938049 | orchestrator | 2026-03-05 01:05:56 | INFO  | Task 
67b1ea4f-ef56-47e6-94d9-1d9931fcb65d is in state STARTED 2026-03-05 01:05:56.938393 | orchestrator | 2026-03-05 01:05:56 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED 2026-03-05 01:05:56.939926 | orchestrator | 2026-03-05 01:05:56 | INFO  | Task 4a46188d-de68-4ab4-9d80-df77858d649e is in state STARTED 2026-03-05 01:05:56.942378 | orchestrator | 2026-03-05 01:05:56 | INFO  | Task 30a2ba67-0000-47e3-ac34-ad13cde36007 is in state STARTED 2026-03-05 01:05:56.942436 | orchestrator | 2026-03-05 01:05:56 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:05:59.981052 | orchestrator | 2026-03-05 01:05:59 | INFO  | Task 67b1ea4f-ef56-47e6-94d9-1d9931fcb65d is in state STARTED 2026-03-05 01:05:59.981739 | orchestrator | 2026-03-05 01:05:59 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED 2026-03-05 01:05:59.984679 | orchestrator | 2026-03-05 01:05:59 | INFO  | Task 4a46188d-de68-4ab4-9d80-df77858d649e is in state STARTED 2026-03-05 01:05:59.985272 | orchestrator | 2026-03-05 01:05:59 | INFO  | Task 30a2ba67-0000-47e3-ac34-ad13cde36007 is in state STARTED 2026-03-05 01:05:59.985298 | orchestrator | 2026-03-05 01:05:59 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:06:03.032187 | orchestrator | 2026-03-05 01:06:03 | INFO  | Task 67b1ea4f-ef56-47e6-94d9-1d9931fcb65d is in state STARTED 2026-03-05 01:06:03.032879 | orchestrator | 2026-03-05 01:06:03 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED 2026-03-05 01:06:03.035682 | orchestrator | 2026-03-05 01:06:03 | INFO  | Task 4a46188d-de68-4ab4-9d80-df77858d649e is in state STARTED 2026-03-05 01:06:03.036931 | orchestrator | 2026-03-05 01:06:03 | INFO  | Task 30a2ba67-0000-47e3-ac34-ad13cde36007 is in state STARTED 2026-03-05 01:06:03.037044 | orchestrator | 2026-03-05 01:06:03 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:06:06.088436 | orchestrator | 2026-03-05 01:06:06 | INFO  | Task 
67b1ea4f-ef56-47e6-94d9-1d9931fcb65d is in state STARTED 2026-03-05 01:06:06.090109 | orchestrator | 2026-03-05 01:06:06 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED 2026-03-05 01:06:06.091975 | orchestrator | 2026-03-05 01:06:06 | INFO  | Task 4a46188d-de68-4ab4-9d80-df77858d649e is in state STARTED 2026-03-05 01:06:06.093484 | orchestrator | 2026-03-05 01:06:06 | INFO  | Task 30a2ba67-0000-47e3-ac34-ad13cde36007 is in state STARTED 2026-03-05 01:06:06.093504 | orchestrator | 2026-03-05 01:06:06 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:06:09.125325 | orchestrator | 2026-03-05 01:06:09 | INFO  | Task 67b1ea4f-ef56-47e6-94d9-1d9931fcb65d is in state STARTED 2026-03-05 01:06:09.127161 | orchestrator | 2026-03-05 01:06:09 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED 2026-03-05 01:06:09.127922 | orchestrator | 2026-03-05 01:06:09 | INFO  | Task 4a46188d-de68-4ab4-9d80-df77858d649e is in state STARTED 2026-03-05 01:06:09.128945 | orchestrator | 2026-03-05 01:06:09 | INFO  | Task 30a2ba67-0000-47e3-ac34-ad13cde36007 is in state STARTED 2026-03-05 01:06:09.129050 | orchestrator | 2026-03-05 01:06:09 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:06:12.178578 | orchestrator | 2026-03-05 01:06:12 | INFO  | Task 67b1ea4f-ef56-47e6-94d9-1d9931fcb65d is in state STARTED 2026-03-05 01:06:12.179249 | orchestrator | 2026-03-05 01:06:12 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED 2026-03-05 01:06:12.180696 | orchestrator | 2026-03-05 01:06:12 | INFO  | Task 4a46188d-de68-4ab4-9d80-df77858d649e is in state STARTED 2026-03-05 01:06:12.183615 | orchestrator | 2026-03-05 01:06:12 | INFO  | Task 30a2ba67-0000-47e3-ac34-ad13cde36007 is in state SUCCESS 2026-03-05 01:06:12.185569 | orchestrator | 2026-03-05 01:06:12.185645 | orchestrator | 2026-03-05 01:06:12.185652 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 
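The cycle above (report each task's state, wait one second, re-check until every task leaves STARTED) is a plain polling loop. The following sketch shows the idea generically; `wait_for_tasks`, `fake_state`, and the simulated backend are illustrative assumptions, not the actual osism client code.

```python
import time

def wait_for_tasks(get_state, task_ids, interval=1.0, timeout=300.0):
    """Poll every task until all reach a terminal state, mirroring the
    'is in state STARTED ... Wait 1 second(s)' cycle in the log."""
    terminal = {"SUCCESS", "FAILURE"}
    deadline = time.monotonic() + timeout
    while True:
        states = {tid: get_state(tid) for tid in task_ids}
        for tid, state in states.items():
            print(f"Task {tid} is in state {state}")
        if all(s in terminal for s in states.values()):
            return states
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still pending: {states}")
        print(f"Wait {interval:g} second(s) until the next check")
        time.sleep(interval)

# Simulated backend (an assumption for this sketch): the task needs
# three polls before it reports SUCCESS.
calls = {"n": 0}

def fake_state(task_id):
    calls["n"] += 1
    return "STARTED" if calls["n"] <= 3 else "SUCCESS"

result = wait_for_tasks(fake_state, ["30a2ba67"], interval=0.01)
```

In the real job the loop ends exactly this way: once the last task (30a2ba67…) reports SUCCESS, control returns and the deployed playbook's own output is flushed to the console.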
2026-03-05 01:06:12.185657 | orchestrator | 2026-03-05 01:06:12.185661 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-05 01:06:12.185679 | orchestrator | Thursday 05 March 2026 01:02:29 +0000 (0:00:00.337) 0:00:00.337 ******** 2026-03-05 01:06:12.185683 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:06:12.185688 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:06:12.185692 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:06:12.185696 | orchestrator | 2026-03-05 01:06:12.185701 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-05 01:06:12.185705 | orchestrator | Thursday 05 March 2026 01:02:29 +0000 (0:00:00.444) 0:00:00.781 ******** 2026-03-05 01:06:12.185721 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2026-03-05 01:06:12.185726 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2026-03-05 01:06:12.185730 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2026-03-05 01:06:12.185734 | orchestrator | 2026-03-05 01:06:12.185738 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2026-03-05 01:06:12.185742 | orchestrator | 2026-03-05 01:06:12.185746 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-05 01:06:12.185750 | orchestrator | Thursday 05 March 2026 01:02:30 +0000 (0:00:00.617) 0:00:01.399 ******** 2026-03-05 01:06:12.185754 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 01:06:12.185759 | orchestrator | 2026-03-05 01:06:12.185763 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2026-03-05 01:06:12.185767 | orchestrator | Thursday 05 March 2026 01:02:31 +0000 (0:00:00.724) 0:00:02.123 ******** 2026-03-05 01:06:12.185771 | orchestrator | changed: 
[testbed-node-0] => (item=cinderv3 (volumev3)) 2026-03-05 01:06:12.185775 | orchestrator | 2026-03-05 01:06:12.185779 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2026-03-05 01:06:12.185783 | orchestrator | Thursday 05 March 2026 01:02:34 +0000 (0:00:03.155) 0:00:05.279 ******** 2026-03-05 01:06:12.185787 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2026-03-05 01:06:12.185791 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2026-03-05 01:06:12.185795 | orchestrator | 2026-03-05 01:06:12.185799 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2026-03-05 01:06:12.185803 | orchestrator | Thursday 05 March 2026 01:02:41 +0000 (0:00:06.785) 0:00:12.065 ******** 2026-03-05 01:06:12.185807 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-05 01:06:12.185811 | orchestrator | 2026-03-05 01:06:12.185815 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2026-03-05 01:06:12.185819 | orchestrator | Thursday 05 March 2026 01:02:44 +0000 (0:00:03.272) 0:00:15.337 ******** 2026-03-05 01:06:12.185832 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2026-03-05 01:06:12.185836 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-05 01:06:12.185840 | orchestrator | 2026-03-05 01:06:12.185844 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2026-03-05 01:06:12.185865 | orchestrator | Thursday 05 March 2026 01:02:49 +0000 (0:00:04.806) 0:00:20.144 ******** 2026-03-05 01:06:12.185869 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-05 01:06:12.185873 | orchestrator | 2026-03-05 01:06:12.185891 | orchestrator | TASK [service-ks-register : cinder | 
Granting user roles] ********************** 2026-03-05 01:06:12.185896 | orchestrator | Thursday 05 March 2026 01:02:53 +0000 (0:00:04.094) 0:00:24.239 ******** 2026-03-05 01:06:12.185924 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2026-03-05 01:06:12.185932 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2026-03-05 01:06:12.185938 | orchestrator | 2026-03-05 01:06:12.185944 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-03-05 01:06:12.185951 | orchestrator | Thursday 05 March 2026 01:03:02 +0000 (0:00:09.296) 0:00:33.535 ******** 2026-03-05 01:06:12.185982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-05 01:06:12.186004 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-05 01:06:12.186009 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-05 01:06:12.186052 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-05 01:06:12.186067 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-05 01:06:12.186076 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-05 01:06:12.186134 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-05 01:06:12.186154 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-05 01:06:12.186162 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-05 01:06:12.186168 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-05 01:06:12.186180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-05 01:06:12.186187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-05 01:06:12.186193 | orchestrator | 2026-03-05 01:06:12.186200 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-05 01:06:12.186207 | orchestrator | Thursday 05 March 2026 01:03:06 +0000 
(0:00:04.475) 0:00:38.011 ******** 2026-03-05 01:06:12.186213 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:06:12.186219 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:06:12.186225 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:06:12.186231 | orchestrator | 2026-03-05 01:06:12.186237 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-05 01:06:12.186243 | orchestrator | Thursday 05 March 2026 01:03:07 +0000 (0:00:01.024) 0:00:39.035 ******** 2026-03-05 01:06:12.186249 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 01:06:12.186256 | orchestrator | 2026-03-05 01:06:12.186262 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-03-05 01:06:12.186268 | orchestrator | Thursday 05 March 2026 01:03:10 +0000 (0:00:02.243) 0:00:41.279 ******** 2026-03-05 01:06:12.186279 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume) 2026-03-05 01:06:12.186287 | orchestrator | changed: [testbed-node-1] => (item=cinder-volume) 2026-03-05 01:06:12.186296 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume) 2026-03-05 01:06:12.186307 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup) 2026-03-05 01:06:12.186313 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup) 2026-03-05 01:06:12.186318 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup) 2026-03-05 01:06:12.186324 | orchestrator | 2026-03-05 01:06:12.186330 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-03-05 01:06:12.186336 | orchestrator | Thursday 05 March 2026 01:03:13 +0000 (0:00:03.295) 0:00:44.575 ******** 2026-03-05 01:06:12.186343 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-05 01:06:12.186357 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-05 01:06:12.186364 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-05 01:06:12.186370 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-05 01:06:12.186387 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-05 01:06:12.186395 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-05 01:06:12.186404 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-05 01:06:12.186409 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-05 01:06:12.186414 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-05 01:06:12.186425 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 
'ceph', 'enabled': True}]) 2026-03-05 01:06:12.186437 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-05 01:06:12.186448 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-05 01:06:12.186455 | orchestrator | 2026-03-05 01:06:12.186462 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-03-05 01:06:12.186468 | orchestrator | Thursday 05 March 2026 01:03:18 +0000 (0:00:05.346) 0:00:49.921 ******** 2026-03-05 01:06:12.186475 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 
2026-03-05 01:06:12.186480 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-05 01:06:12.186484 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-05 01:06:12.186488 | orchestrator | 2026-03-05 01:06:12.186491 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-03-05 01:06:12.186495 | orchestrator | Thursday 05 March 2026 01:03:21 +0000 (0:00:02.948) 0:00:52.870 ******** 2026-03-05 01:06:12.186499 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring) 2026-03-05 01:06:12.186503 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring) 2026-03-05 01:06:12.186507 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring) 2026-03-05 01:06:12.186510 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring) 2026-03-05 01:06:12.186514 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring) 2026-03-05 01:06:12.186518 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring) 2026-03-05 01:06:12.186522 | orchestrator | 2026-03-05 01:06:12.186525 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-03-05 01:06:12.186529 | orchestrator | Thursday 05 March 2026 01:03:26 +0000 (0:00:04.712) 0:00:57.582 ******** 2026-03-05 01:06:12.186533 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-03-05 01:06:12.186537 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-03-05 01:06:12.186541 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-03-05 01:06:12.186545 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-03-05 01:06:12.186548 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-03-05 01:06:12.186552 | orchestrator | ok: 
[testbed-node-2] => (item=cinder-backup) 2026-03-05 01:06:12.186556 | orchestrator | 2026-03-05 01:06:12.186560 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-03-05 01:06:12.186563 | orchestrator | Thursday 05 March 2026 01:03:28 +0000 (0:00:01.605) 0:00:59.188 ******** 2026-03-05 01:06:12.186567 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:06:12.186571 | orchestrator | 2026-03-05 01:06:12.186575 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-03-05 01:06:12.186578 | orchestrator | Thursday 05 March 2026 01:03:28 +0000 (0:00:00.276) 0:00:59.465 ******** 2026-03-05 01:06:12.186582 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:06:12.186586 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:06:12.186589 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:06:12.186593 | orchestrator | 2026-03-05 01:06:12.186600 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-05 01:06:12.186604 | orchestrator | Thursday 05 March 2026 01:03:29 +0000 (0:00:00.638) 0:01:00.104 ******** 2026-03-05 01:06:12.186608 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 01:06:12.186612 | orchestrator | 2026-03-05 01:06:12.186616 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-03-05 01:06:12.186630 | orchestrator | Thursday 05 March 2026 01:03:30 +0000 (0:00:01.010) 0:01:01.114 ******** 2026-03-05 01:06:12.186637 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-05 01:06:12.186642 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-05 01:06:12.186646 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-05 01:06:12.186651 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-05 01:06:12.186655 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-05 01:06:12.186675 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-05 01:06:12.186679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-05 01:06:12.186683 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-05 01:06:12.186687 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-05 01:06:12.186691 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-05 01:06:12.186695 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-05 
01:06:12.186709 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-05 01:06:12.186714 | orchestrator | 2026-03-05 01:06:12.186717 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-03-05 01:06:12.186721 | orchestrator | Thursday 05 March 2026 01:03:34 +0000 (0:00:04.920) 0:01:06.034 ******** 2026-03-05 01:06:12.186725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-05 01:06:12.186729 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-05 01:06:12.186733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-05 01:06:12.186737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 
5672'], 'timeout': '30'}}})  2026-03-05 01:06:12.186744 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:06:12.186755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-05 01:06:12.186759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-05 01:06:12.186763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-05 01:06:12.186767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-05 01:06:12.186771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-05 01:06:12.186780 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:06:12.186784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-05 01:06:12.186794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-05 01:06:12.186798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-05 01:06:12.186802 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:06:12.186806 | orchestrator | 2026-03-05 01:06:12.186810 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-03-05 01:06:12.186814 | orchestrator | Thursday 05 March 2026 01:03:36 +0000 (0:00:01.280) 0:01:07.315 ******** 2026-03-05 01:06:12.186818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-05 01:06:12.186822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-05 01:06:12.186829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-05 01:06:12.186839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-05 01:06:12.186843 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:06:12.186847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-05 01:06:12.186851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-05 01:06:12.186855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-05 01:06:12.186862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-05 01:06:12.186866 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:06:12.186870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-05 01:06:12.186880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 
'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-05 01:06:12.186884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-05 01:06:12.186888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-05 01:06:12.186895 | orchestrator | skipping: 
[testbed-node-2] 2026-03-05 01:06:12.186903 | orchestrator | 2026-03-05 01:06:12.186912 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2026-03-05 01:06:12.186925 | orchestrator | Thursday 05 March 2026 01:03:39 +0000 (0:00:03.678) 0:01:10.993 ******** 2026-03-05 01:06:12.186931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-05 01:06:12.186938 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-05 01:06:12.186952 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-05 01:06:12.186960 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-05 01:06:12.186966 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-05 01:06:12.186973 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-05 01:06:12.186989 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-05 01:06:12.186996 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 
'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-05 01:06:12.187011 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-05 01:06:12.187019 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-05 01:06:12.187025 | orchestrator 
| changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-05 01:06:12.187037 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-05 01:06:12.187041 | orchestrator | 2026-03-05 01:06:12.187045 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-03-05 01:06:12.187049 | orchestrator | Thursday 05 March 2026 01:03:45 +0000 (0:00:05.872) 0:01:16.866 ******** 2026-03-05 01:06:12.187053 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-03-05 01:06:12.187056 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-03-05 01:06:12.187060 | orchestrator | changed: 
[testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-03-05 01:06:12.187064 | orchestrator | 2026-03-05 01:06:12.187068 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-03-05 01:06:12.187072 | orchestrator | Thursday 05 March 2026 01:03:48 +0000 (0:00:02.704) 0:01:19.571 ******** 2026-03-05 01:06:12.187079 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-05 01:06:12.187114 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 
'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-05 01:06:12.187118 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-05 01:06:12.187127 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-05 01:06:12.187131 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-05 01:06:12.187136 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-05 01:06:12.187144 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-05 01:06:12.187152 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-05 01:06:12.187156 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-05 01:06:12.187163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 
5672'], 'timeout': '30'}}}) 2026-03-05 01:06:12.187167 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-05 01:06:12.187172 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-05 01:06:12.187178 | orchestrator | 2026-03-05 01:06:12.187184 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-03-05 01:06:12.187194 | orchestrator | Thursday 05 March 2026 01:04:06 +0000 (0:00:18.293) 0:01:37.864 ******** 2026-03-05 01:06:12.187202 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:06:12.187208 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:06:12.187215 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:06:12.187221 | orchestrator 
| 2026-03-05 01:06:12.187227 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-03-05 01:06:12.187237 | orchestrator | Thursday 05 March 2026 01:04:09 +0000 (0:00:02.736) 0:01:40.601 ******** 2026-03-05 01:06:12.187247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-05 01:06:12.187259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-05 01:06:12.187265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-05 01:06:12.187273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-05 01:06:12.187279 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:06:12.187285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-05 01:06:12.187300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-05 01:06:12.187307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-05 01:06:12.187318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-05 01:06:12.187325 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:06:12.187331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-05 01:06:12.187338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-05 
01:06:12.187346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-05 01:06:12.187360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-05 01:06:12.187371 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:06:12.187378 | orchestrator | 2026-03-05 01:06:12.187385 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-03-05 01:06:12.187392 | orchestrator | Thursday 05 March 2026 01:04:10 +0000 (0:00:01.190) 0:01:41.792 ******** 2026-03-05 01:06:12.187400 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:06:12.187407 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:06:12.187414 | 
orchestrator | skipping: [testbed-node-2] 2026-03-05 01:06:12.187419 | orchestrator | 2026-03-05 01:06:12.187423 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2026-03-05 01:06:12.187427 | orchestrator | Thursday 05 March 2026 01:04:11 +0000 (0:00:00.308) 0:01:42.100 ******** 2026-03-05 01:06:12.187431 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-05 01:06:12.187435 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-05 01:06:12.187439 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-05 01:06:12.187449 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-05 01:06:12.187457 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-05 01:06:12.187461 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-05 01:06:12.187465 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-05 01:06:12.187469 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 
'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-05 01:06:12.187473 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-05 01:06:12.187480 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-05 01:06:12.187493 | orchestrator 
| changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-05 01:06:12.187500 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-05 01:06:12.187506 | orchestrator | 2026-03-05 01:06:12.187512 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-05 01:06:12.187518 | orchestrator | Thursday 05 March 2026 01:04:14 +0000 (0:00:03.474) 0:01:45.575 ******** 2026-03-05 01:06:12.187524 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:06:12.187530 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:06:12.187536 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:06:12.187542 | orchestrator | 2026-03-05 01:06:12.187548 | orchestrator | TASK [cinder : 
Creating Cinder database] *************************************** 2026-03-05 01:06:12.187554 | orchestrator | Thursday 05 March 2026 01:04:15 +0000 (0:00:01.016) 0:01:46.592 ******** 2026-03-05 01:06:12.187559 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:06:12.187566 | orchestrator | 2026-03-05 01:06:12.187572 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2026-03-05 01:06:12.187578 | orchestrator | Thursday 05 March 2026 01:04:17 +0000 (0:00:02.369) 0:01:48.961 ******** 2026-03-05 01:06:12.187584 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:06:12.187591 | orchestrator | 2026-03-05 01:06:12.187598 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-03-05 01:06:12.187604 | orchestrator | Thursday 05 March 2026 01:04:20 +0000 (0:00:02.783) 0:01:51.746 ******** 2026-03-05 01:06:12.187611 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:06:12.187617 | orchestrator | 2026-03-05 01:06:12.187623 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-05 01:06:12.187630 | orchestrator | Thursday 05 March 2026 01:04:41 +0000 (0:00:20.834) 0:02:12.580 ******** 2026-03-05 01:06:12.187636 | orchestrator | 2026-03-05 01:06:12.187642 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-05 01:06:12.187648 | orchestrator | Thursday 05 March 2026 01:04:41 +0000 (0:00:00.262) 0:02:12.843 ******** 2026-03-05 01:06:12.187654 | orchestrator | 2026-03-05 01:06:12.187660 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-05 01:06:12.187667 | orchestrator | Thursday 05 March 2026 01:04:42 +0000 (0:00:00.295) 0:02:13.138 ******** 2026-03-05 01:06:12.187673 | orchestrator | 2026-03-05 01:06:12.187680 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 
2026-03-05 01:06:12.187686 | orchestrator | Thursday 05 March 2026 01:04:42 +0000 (0:00:00.225) 0:02:13.364 ******** 2026-03-05 01:06:12.187698 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:06:12.187707 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:06:12.187711 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:06:12.187715 | orchestrator | 2026-03-05 01:06:12.187719 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-03-05 01:06:12.187723 | orchestrator | Thursday 05 March 2026 01:05:15 +0000 (0:00:32.959) 0:02:46.323 ******** 2026-03-05 01:06:12.187726 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:06:12.187730 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:06:12.187734 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:06:12.187738 | orchestrator | 2026-03-05 01:06:12.187741 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-03-05 01:06:12.187745 | orchestrator | Thursday 05 March 2026 01:05:27 +0000 (0:00:12.667) 0:02:58.991 ******** 2026-03-05 01:06:12.187749 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:06:12.187753 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:06:12.187757 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:06:12.187760 | orchestrator | 2026-03-05 01:06:12.187764 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-03-05 01:06:12.187768 | orchestrator | Thursday 05 March 2026 01:05:59 +0000 (0:00:31.354) 0:03:30.346 ******** 2026-03-05 01:06:12.187772 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:06:12.187776 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:06:12.187779 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:06:12.187783 | orchestrator | 2026-03-05 01:06:12.187787 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2026-03-05 
01:06:12.187835 | orchestrator | Thursday 05 March 2026 01:06:09 +0000 (0:00:09.819) 0:03:40.166 ******** 2026-03-05 01:06:12.187840 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:06:12.187844 | orchestrator | 2026-03-05 01:06:12.187847 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-05 01:06:12.187856 | orchestrator | testbed-node-0 : ok=30  changed=22  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-05 01:06:12.187861 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-05 01:06:12.187865 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-05 01:06:12.187869 | orchestrator | 2026-03-05 01:06:12.187873 | orchestrator | 2026-03-05 01:06:12.187877 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-05 01:06:12.187880 | orchestrator | Thursday 05 March 2026 01:06:09 +0000 (0:00:00.322) 0:03:40.489 ******** 2026-03-05 01:06:12.187884 | orchestrator | =============================================================================== 2026-03-05 01:06:12.187889 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 32.96s 2026-03-05 01:06:12.187895 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 31.35s 2026-03-05 01:06:12.187903 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 20.84s 2026-03-05 01:06:12.187912 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 18.29s 2026-03-05 01:06:12.187918 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 12.67s 2026-03-05 01:06:12.187924 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 9.82s 2026-03-05 01:06:12.187930 | orchestrator | 
service-ks-register : cinder | Granting user roles ---------------------- 9.30s 2026-03-05 01:06:12.187936 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.79s 2026-03-05 01:06:12.187943 | orchestrator | cinder : Copying over config.json files for services -------------------- 5.87s 2026-03-05 01:06:12.187949 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 5.35s 2026-03-05 01:06:12.187956 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.92s 2026-03-05 01:06:12.187966 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.81s 2026-03-05 01:06:12.187970 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 4.71s 2026-03-05 01:06:12.187974 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 4.47s 2026-03-05 01:06:12.187977 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 4.09s 2026-03-05 01:06:12.187981 | orchestrator | service-cert-copy : cinder | Copying over backend internal TLS key ------ 3.68s 2026-03-05 01:06:12.187985 | orchestrator | cinder : Check cinder containers ---------------------------------------- 3.48s 2026-03-05 01:06:12.187989 | orchestrator | cinder : Ensuring cinder service ceph config subdirs exists ------------- 3.30s 2026-03-05 01:06:12.187992 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.27s 2026-03-05 01:06:12.187996 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.16s 2026-03-05 01:06:12.188000 | orchestrator | 2026-03-05 01:06:12 | INFO  | Task 2007cb41-b21d-48ad-acaa-222e2de02aba is in state STARTED 2026-03-05 01:06:12.188004 | orchestrator | 2026-03-05 01:06:12 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:06:15.230784 | orchestrator | 2026-03-05 
01:06:15 | INFO  | Task 67b1ea4f-ef56-47e6-94d9-1d9931fcb65d is in state STARTED 2026-03-05 01:06:15.230857 | orchestrator | 2026-03-05 01:06:15 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED 2026-03-05 01:06:15.233625 | orchestrator | 2026-03-05 01:06:15 | INFO  | Task 4a46188d-de68-4ab4-9d80-df77858d649e is in state SUCCESS 2026-03-05 01:06:15.235009 | orchestrator | 2026-03-05 01:06:15.235068 | orchestrator | 2026-03-05 01:06:15.235076 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-05 01:06:15.235112 | orchestrator | 2026-03-05 01:06:15.235118 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-05 01:06:15.235127 | orchestrator | Thursday 05 March 2026 01:02:23 +0000 (0:00:00.620) 0:00:00.620 ******** 2026-03-05 01:06:15.235134 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:06:15.235142 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:06:15.235150 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:06:15.235157 | orchestrator | 2026-03-05 01:06:15.235162 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-05 01:06:15.235166 | orchestrator | Thursday 05 March 2026 01:02:24 +0000 (0:00:00.991) 0:00:01.611 ******** 2026-03-05 01:06:15.235170 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2026-03-05 01:06:15.235175 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2026-03-05 01:06:15.235179 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2026-03-05 01:06:15.235183 | orchestrator | 2026-03-05 01:06:15.235187 | orchestrator | PLAY [Apply role glance] ******************************************************* 2026-03-05 01:06:15.235191 | orchestrator | 2026-03-05 01:06:15.235195 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-05 01:06:15.235199 | orchestrator | 
Thursday 05 March 2026 01:02:25 +0000 (0:00:01.586) 0:00:03.198 ******** 2026-03-05 01:06:15.235202 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 01:06:15.235207 | orchestrator | 2026-03-05 01:06:15.235211 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2026-03-05 01:06:15.235215 | orchestrator | Thursday 05 March 2026 01:02:27 +0000 (0:00:01.680) 0:00:04.879 ******** 2026-03-05 01:06:15.235233 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2026-03-05 01:06:15.235261 | orchestrator | 2026-03-05 01:06:15.235272 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2026-03-05 01:06:15.235278 | orchestrator | Thursday 05 March 2026 01:02:31 +0000 (0:00:03.926) 0:00:08.805 ******** 2026-03-05 01:06:15.235284 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2026-03-05 01:06:15.235313 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2026-03-05 01:06:15.235319 | orchestrator | 2026-03-05 01:06:15.235325 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2026-03-05 01:06:15.235331 | orchestrator | Thursday 05 March 2026 01:02:37 +0000 (0:00:06.427) 0:00:15.233 ******** 2026-03-05 01:06:15.235336 | orchestrator | changed: [testbed-node-0] => (item=service) 2026-03-05 01:06:15.235341 | orchestrator | 2026-03-05 01:06:15.235347 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2026-03-05 01:06:15.235354 | orchestrator | Thursday 05 March 2026 01:02:41 +0000 (0:00:03.240) 0:00:18.473 ******** 2026-03-05 01:06:15.235360 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2026-03-05 01:06:15.235367 | orchestrator | [WARNING]: Module did not set no_log 
for update_password 2026-03-05 01:06:15.235373 | orchestrator | 2026-03-05 01:06:15.235379 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2026-03-05 01:06:15.235386 | orchestrator | Thursday 05 March 2026 01:02:45 +0000 (0:00:04.381) 0:00:22.855 ******** 2026-03-05 01:06:15.235392 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-05 01:06:15.235399 | orchestrator | 2026-03-05 01:06:15.235405 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2026-03-05 01:06:15.235411 | orchestrator | Thursday 05 March 2026 01:02:49 +0000 (0:00:04.221) 0:00:27.076 ******** 2026-03-05 01:06:15.235418 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2026-03-05 01:06:15.235424 | orchestrator | 2026-03-05 01:06:15.235433 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2026-03-05 01:06:15.235440 | orchestrator | Thursday 05 March 2026 01:02:54 +0000 (0:00:04.535) 0:00:31.611 ******** 2026-03-05 01:06:15.235471 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-05 01:06:15.235487 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 
fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-05 01:06:15.235502 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-05 01:06:15.235510 | orchestrator | 2026-03-05 01:06:15.235516 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-05 01:06:15.235523 | orchestrator | Thursday 05 March 2026 01:03:04 +0000 (0:00:10.648) 0:00:42.259 ******** 2026-03-05 01:06:15.235530 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 01:06:15.235537 | orchestrator | 2026-03-05 01:06:15.235541 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2026-03-05 01:06:15.235549 | orchestrator | Thursday 05 March 2026 01:03:07 +0000 (0:00:03.094) 0:00:45.354 ******** 2026-03-05 01:06:15.235553 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:06:15.235557 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:06:15.235560 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:06:15.235564 | orchestrator | 2026-03-05 01:06:15.235568 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2026-03-05 01:06:15.235572 | orchestrator | Thursday 05 March 2026 01:03:16 +0000 (0:00:08.553) 0:00:53.908 ******** 2026-03-05 01:06:15.235576 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-05 01:06:15.235585 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-05 01:06:15.235588 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-05 01:06:15.235592 | orchestrator | 2026-03-05 01:06:15.235596 | orchestrator | TASK [glance : Copy over ceph Glance 
keyrings] ********************************* 2026-03-05 01:06:15.235600 | orchestrator | Thursday 05 March 2026 01:03:18 +0000 (0:00:02.224) 0:00:56.132 ******** 2026-03-05 01:06:15.235603 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-05 01:06:15.235608 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-05 01:06:15.235612 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-05 01:06:15.235615 | orchestrator | 2026-03-05 01:06:15.235623 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2026-03-05 01:06:15.235626 | orchestrator | Thursday 05 March 2026 01:03:20 +0000 (0:00:01.449) 0:00:57.581 ******** 2026-03-05 01:06:15.235630 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:06:15.235634 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:06:15.235638 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:06:15.235641 | orchestrator | 2026-03-05 01:06:15.235645 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2026-03-05 01:06:15.235649 | orchestrator | Thursday 05 March 2026 01:03:21 +0000 (0:00:01.015) 0:00:58.597 ******** 2026-03-05 01:06:15.235653 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:06:15.235656 | orchestrator | 2026-03-05 01:06:15.235660 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2026-03-05 01:06:15.235664 | orchestrator | Thursday 05 March 2026 01:03:21 +0000 (0:00:00.187) 0:00:58.785 ******** 2026-03-05 01:06:15.235668 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:06:15.235671 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:06:15.235677 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:06:15.235683 | orchestrator | 2026-03-05 
01:06:15.235689 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-05 01:06:15.235694 | orchestrator | Thursday 05 March 2026 01:03:21 +0000 (0:00:00.534) 0:00:59.319 ******** 2026-03-05 01:06:15.235702 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 01:06:15.235710 | orchestrator | 2026-03-05 01:06:15.235718 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-03-05 01:06:15.235723 | orchestrator | Thursday 05 March 2026 01:03:23 +0000 (0:00:01.120) 0:01:00.440 ******** 2026-03-05 01:06:15.235729 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-05 01:06:15.235750 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 
2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-05 01:06:15.235758 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-05 01:06:15.235763 | orchestrator | 2026-03-05 01:06:15.235770 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 
2026-03-05 01:06:15.235776 | orchestrator | Thursday 05 March 2026 01:03:30 +0000 (0:00:07.926) 0:01:08.367 ******** 2026-03-05 01:06:15.235791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-05 01:06:15.235801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-05 01:06:15.235807 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:06:15.235813 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:06:15.235824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-05 01:06:15.235835 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:06:15.235840 | orchestrator | 2026-03-05 01:06:15.235846 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-03-05 01:06:15.235852 | orchestrator | Thursday 05 March 2026 01:03:35 +0000 (0:00:04.946) 0:01:13.314 ******** 2026-03-05 01:06:15.235862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-05 01:06:15.235869 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:06:15.235876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-05 01:06:15.235889 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:06:15.235904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-05 01:06:15.235908 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:06:15.235912 | orchestrator | 2026-03-05 01:06:15.235915 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-03-05 01:06:15.235919 | orchestrator | Thursday 05 March 2026 01:03:43 +0000 (0:00:07.595) 0:01:20.910 ******** 2026-03-05 01:06:15.235923 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:06:15.235927 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:06:15.235931 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:06:15.235937 | orchestrator | 2026-03-05 01:06:15.235942 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-03-05 01:06:15.235948 | orchestrator | Thursday 05 March 2026 01:03:49 +0000 (0:00:05.844) 0:01:26.754 ******** 2026-03-05 01:06:15.235954 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-05 01:06:15.235975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-05 01:06:15.235983 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-05 01:06:15.235996 | orchestrator | 2026-03-05 01:06:15.236000 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-03-05 01:06:15.236004 | orchestrator | Thursday 05 March 2026 01:03:56 +0000 (0:00:06.924) 0:01:33.680 ******** 2026-03-05 01:06:15.236007 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:06:15.236011 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:06:15.236015 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:06:15.236019 | orchestrator | 2026-03-05 01:06:15.236022 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-03-05 01:06:15.236026 | orchestrator | Thursday 05 March 2026 01:04:06 +0000 (0:00:09.808) 0:01:43.488 ******** 2026-03-05 01:06:15.236030 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:06:15.236034 | orchestrator | 
skipping: [testbed-node-2] 2026-03-05 01:06:15.236037 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:06:15.236041 | orchestrator | 2026-03-05 01:06:15.236045 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2026-03-05 01:06:15.236048 | orchestrator | Thursday 05 March 2026 01:04:11 +0000 (0:00:05.158) 0:01:48.647 ******** 2026-03-05 01:06:15.236052 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:06:15.236056 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:06:15.236060 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:06:15.236063 | orchestrator | 2026-03-05 01:06:15.236067 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-03-05 01:06:15.236071 | orchestrator | Thursday 05 March 2026 01:04:16 +0000 (0:00:05.169) 0:01:53.816 ******** 2026-03-05 01:06:15.236075 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:06:15.236266 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:06:15.236276 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:06:15.236280 | orchestrator | 2026-03-05 01:06:15.236284 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2026-03-05 01:06:15.236288 | orchestrator | Thursday 05 March 2026 01:04:22 +0000 (0:00:06.522) 0:02:00.339 ******** 2026-03-05 01:06:15.236292 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:06:15.236295 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:06:15.236299 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:06:15.236303 | orchestrator | 2026-03-05 01:06:15.236307 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-03-05 01:06:15.236311 | orchestrator | Thursday 05 March 2026 01:04:27 +0000 (0:00:04.992) 0:02:05.332 ******** 2026-03-05 01:06:15.236314 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:06:15.236318 | orchestrator | 
skipping: [testbed-node-1] 2026-03-05 01:06:15.236322 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:06:15.236325 | orchestrator | 2026-03-05 01:06:15.236329 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2026-03-05 01:06:15.236333 | orchestrator | Thursday 05 March 2026 01:04:28 +0000 (0:00:00.341) 0:02:05.673 ******** 2026-03-05 01:06:15.236337 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-05 01:06:15.236341 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:06:15.236345 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-05 01:06:15.236349 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:06:15.236408 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-05 01:06:15.236413 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:06:15.236419 | orchestrator | 2026-03-05 01:06:15.236434 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-03-05 01:06:15.236451 | orchestrator | Thursday 05 March 2026 01:04:33 +0000 (0:00:05.217) 0:02:10.890 ******** 2026-03-05 01:06:15.236458 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:06:15.236464 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:06:15.236470 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:06:15.236490 | orchestrator | 2026-03-05 01:06:15.236496 | orchestrator | TASK [glance : Check glance containers] **************************************** 2026-03-05 01:06:15.236502 | orchestrator | Thursday 05 March 2026 01:04:39 +0000 (0:00:05.638) 0:02:16.529 ******** 2026-03-05 01:06:15.236516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 
'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-05 01:06:15.236531 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-05 01:06:15.236544 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-05 01:06:15.236557 | orchestrator | 2026-03-05 01:06:15.236564 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-05 01:06:15.236570 | orchestrator | Thursday 05 March 2026 01:04:45 +0000 (0:00:06.196) 0:02:22.725 ******** 2026-03-05 01:06:15.236576 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:06:15.236582 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:06:15.236588 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:06:15.236594 | orchestrator | 2026-03-05 01:06:15.236600 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2026-03-05 01:06:15.236607 | orchestrator | Thursday 05 March 2026 01:04:45 +0000 (0:00:00.669) 0:02:23.395 ******** 2026-03-05 01:06:15.236611 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:06:15.236614 | orchestrator | 2026-03-05 01:06:15.236618 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 
2026-03-05 01:06:15.236622 | orchestrator | Thursday 05 March 2026 01:04:48 +0000 (0:00:02.397) 0:02:25.793 ******** 2026-03-05 01:06:15.236626 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:06:15.236630 | orchestrator | 2026-03-05 01:06:15.236633 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2026-03-05 01:06:15.236637 | orchestrator | Thursday 05 March 2026 01:04:50 +0000 (0:00:02.475) 0:02:28.269 ******** 2026-03-05 01:06:15.236641 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:06:15.236645 | orchestrator | 2026-03-05 01:06:15.236648 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2026-03-05 01:06:15.236652 | orchestrator | Thursday 05 March 2026 01:04:53 +0000 (0:00:02.388) 0:02:30.658 ******** 2026-03-05 01:06:15.236656 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:06:15.236660 | orchestrator | 2026-03-05 01:06:15.236664 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2026-03-05 01:06:15.236668 | orchestrator | Thursday 05 March 2026 01:05:28 +0000 (0:00:35.497) 0:03:06.157 ******** 2026-03-05 01:06:15.236671 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:06:15.236675 | orchestrator | 2026-03-05 01:06:15.236679 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-03-05 01:06:15.236683 | orchestrator | Thursday 05 March 2026 01:05:31 +0000 (0:00:02.708) 0:03:08.866 ******** 2026-03-05 01:06:15.236687 | orchestrator | 2026-03-05 01:06:15.236694 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-03-05 01:06:15.236698 | orchestrator | Thursday 05 March 2026 01:05:32 +0000 (0:00:01.005) 0:03:09.871 ******** 2026-03-05 01:06:15.236705 | orchestrator | 2026-03-05 01:06:15.236709 | orchestrator | TASK [glance : Flush handlers] ************************************************* 
2026-03-05 01:06:15.236713 | orchestrator | Thursday 05 March 2026 01:05:32 +0000 (0:00:00.069) 0:03:09.941 ******** 2026-03-05 01:06:15.236717 | orchestrator | 2026-03-05 01:06:15.236721 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2026-03-05 01:06:15.236724 | orchestrator | Thursday 05 March 2026 01:05:32 +0000 (0:00:00.098) 0:03:10.039 ******** 2026-03-05 01:06:15.236728 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:06:15.236732 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:06:15.236736 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:06:15.236740 | orchestrator | 2026-03-05 01:06:15.236743 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-05 01:06:15.236749 | orchestrator | testbed-node-0 : ok=27  changed=20  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-05 01:06:15.236755 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-05 01:06:15.236759 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-05 01:06:15.236763 | orchestrator | 2026-03-05 01:06:15.236767 | orchestrator | 2026-03-05 01:06:15.236770 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-05 01:06:15.236775 | orchestrator | Thursday 05 March 2026 01:06:13 +0000 (0:00:41.136) 0:03:51.175 ******** 2026-03-05 01:06:15.236779 | orchestrator | =============================================================================== 2026-03-05 01:06:15.236783 | orchestrator | glance : Restart glance-api container ---------------------------------- 41.14s 2026-03-05 01:06:15.236787 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 35.50s 2026-03-05 01:06:15.236791 | orchestrator | glance : Ensuring config directories exist 
----------------------------- 10.65s 2026-03-05 01:06:15.236817 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 9.81s 2026-03-05 01:06:15.236821 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 8.55s 2026-03-05 01:06:15.236825 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 7.93s 2026-03-05 01:06:15.236829 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 7.60s 2026-03-05 01:06:15.236832 | orchestrator | glance : Copying over config.json files for services -------------------- 6.93s 2026-03-05 01:06:15.236836 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 6.52s 2026-03-05 01:06:15.236840 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.43s 2026-03-05 01:06:15.236843 | orchestrator | glance : Check glance containers ---------------------------------------- 6.20s 2026-03-05 01:06:15.236847 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 5.85s 2026-03-05 01:06:15.236851 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 5.64s 2026-03-05 01:06:15.236855 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 5.22s 2026-03-05 01:06:15.236859 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 5.17s 2026-03-05 01:06:15.236863 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 5.16s 2026-03-05 01:06:15.236869 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 4.99s 2026-03-05 01:06:15.236875 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 4.95s 2026-03-05 01:06:15.236882 | orchestrator | service-ks-register : glance | Granting user roles 
---------------------- 4.54s 2026-03-05 01:06:15.236888 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.38s 2026-03-05 01:06:15.237671 | orchestrator | 2026-03-05 01:06:15 | INFO  | Task 2007cb41-b21d-48ad-acaa-222e2de02aba is in state STARTED 2026-03-05 01:06:15.237814 | orchestrator | 2026-03-05 01:06:15 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:06:18.274245 | orchestrator | 2026-03-05 01:06:18 | INFO  | Task a33bac03-d70d-421e-93a6-bd4d68696955 is in state STARTED 2026-03-05 01:06:18.275554 | orchestrator | 2026-03-05 01:06:18 | INFO  | Task 67b1ea4f-ef56-47e6-94d9-1d9931fcb65d is in state STARTED 2026-03-05 01:06:18.277799 | orchestrator | 2026-03-05 01:06:18 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED 2026-03-05 01:06:18.279288 | orchestrator | 2026-03-05 01:06:18 | INFO  | Task 2007cb41-b21d-48ad-acaa-222e2de02aba is in state STARTED 2026-03-05 01:06:18.289547 | orchestrator | 2026-03-05 01:06:18 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:06:21.378819 | orchestrator | 2026-03-05 01:06:21 | INFO  | Task a33bac03-d70d-421e-93a6-bd4d68696955 is in state STARTED 2026-03-05 01:06:21.381654 | orchestrator | 2026-03-05 01:06:21 | INFO  | Task 67b1ea4f-ef56-47e6-94d9-1d9931fcb65d is in state STARTED 2026-03-05 01:06:21.384213 | orchestrator | 2026-03-05 01:06:21 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED 2026-03-05 01:06:21.386965 | orchestrator | 2026-03-05 01:06:21 | INFO  | Task 2007cb41-b21d-48ad-acaa-222e2de02aba is in state STARTED 2026-03-05 01:06:21.387142 | orchestrator | 2026-03-05 01:06:21 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:06:24.438590 | orchestrator | 2026-03-05 01:06:24 | INFO  | Task a33bac03-d70d-421e-93a6-bd4d68696955 is in state STARTED 2026-03-05 01:06:24.442510 | orchestrator | 2026-03-05 01:06:24 | INFO  | Task 67b1ea4f-ef56-47e6-94d9-1d9931fcb65d is in state STARTED 2026-03-05 
01:06:24.445798 | orchestrator | 2026-03-05 01:06:24 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED 2026-03-05 01:06:24.448195 | orchestrator | 2026-03-05 01:06:24 | INFO  | Task 2007cb41-b21d-48ad-acaa-222e2de02aba is in state STARTED 2026-03-05 01:06:24.448271 | orchestrator | 2026-03-05 01:06:24 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:06:27.490569 | orchestrator | 2026-03-05 01:06:27 | INFO  | Task a33bac03-d70d-421e-93a6-bd4d68696955 is in state STARTED 2026-03-05 01:06:27.495354 | orchestrator | 2026-03-05 01:06:27 | INFO  | Task 67b1ea4f-ef56-47e6-94d9-1d9931fcb65d is in state SUCCESS 2026-03-05 01:06:27.498217 | orchestrator | 2026-03-05 01:06:27.498315 | orchestrator | 2026-03-05 01:06:27.498327 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-05 01:06:27.498335 | orchestrator | 2026-03-05 01:06:27.498343 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-05 01:06:27.498366 | orchestrator | Thursday 05 March 2026 01:02:12 +0000 (0:00:00.345) 0:00:00.345 ******** 2026-03-05 01:06:27.498374 | orchestrator | ok: [testbed-manager] 2026-03-05 01:06:27.498396 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:06:27.498403 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:06:27.498409 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:06:27.498416 | orchestrator | ok: [testbed-node-3] 2026-03-05 01:06:27.498422 | orchestrator | ok: [testbed-node-4] 2026-03-05 01:06:27.498428 | orchestrator | ok: [testbed-node-5] 2026-03-05 01:06:27.498435 | orchestrator | 2026-03-05 01:06:27.498441 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-05 01:06:27.498448 | orchestrator | Thursday 05 March 2026 01:02:13 +0000 (0:00:01.122) 0:00:01.468 ******** 2026-03-05 01:06:27.498455 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2026-03-05 
01:06:27.498462 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2026-03-05 01:06:27.498469 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2026-03-05 01:06:27.498475 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2026-03-05 01:06:27.498502 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2026-03-05 01:06:27.498512 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2026-03-05 01:06:27.498519 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2026-03-05 01:06:27.498525 | orchestrator | 2026-03-05 01:06:27.498531 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2026-03-05 01:06:27.498538 | orchestrator | 2026-03-05 01:06:27.498544 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-03-05 01:06:27.498550 | orchestrator | Thursday 05 March 2026 01:02:14 +0000 (0:00:01.004) 0:00:02.473 ******** 2026-03-05 01:06:27.498557 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-05 01:06:27.498565 | orchestrator | 2026-03-05 01:06:27.498572 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2026-03-05 01:06:27.498578 | orchestrator | Thursday 05 March 2026 01:02:16 +0000 (0:00:02.001) 0:00:04.475 ******** 2026-03-05 01:06:27.498588 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-05 01:06:27.498599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:06:27.498607 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:06:27.498614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-05 01:06:27.498693 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-05 01:06:27.498709 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-05 01:06:27.498725 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-05 01:06:27.498733 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:06:27.498740 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:06:27.498746 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-05 01:06:27.498754 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:06:27.498761 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:06:27.498780 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:06:27.498790 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-05 01:06:27.498794 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 
'dimensions': {}}}) 2026-03-05 01:06:27.498800 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-05 01:06:27.498804 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-05 01:06:27.498808 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-05 01:06:27.498812 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-05 01:06:27.498816 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-05 01:06:27.498892 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-05 01:06:27.498897 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:06:27.498900 | 
orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-05 01:06:27.498904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:06:27.498908 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-05 01:06:27.498913 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-05 01:06:27.498917 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-05 01:06:27.498927 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-05 01:06:27.498937 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 
'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:06:27.498941 | orchestrator | 2026-03-05 01:06:27.498945 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-03-05 01:06:27.498949 | orchestrator | Thursday 05 March 2026 01:02:21 +0000 (0:00:05.222) 0:00:09.697 ******** 2026-03-05 01:06:27.498953 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-05 01:06:27.498958 | orchestrator | 2026-03-05 01:06:27.498961 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-03-05 01:06:27.498965 | orchestrator | Thursday 05 March 2026 01:02:23 +0000 (0:00:02.328) 0:00:12.026 ******** 2026-03-05 01:06:27.498969 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-05 01:06:27.498973 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-05 01:06:27.498977 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-05 01:06:27.498981 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-05 01:06:27.498995 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-05 01:06:27.498999 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-05 01:06:27.499003 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-05 01:06:27.499007 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:06:27.499011 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:06:27.499015 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:06:27.499019 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-05 01:06:27.499023 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-05 01:06:27.499035 | orchestrator | changed: [testbed-node-4] => 
(item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-05 01:06:27.499047 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-05 01:06:27.499051 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:06:27.499055 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:06:27.499059 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:06:27.499063 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-05 01:06:27.499067 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-05 01:06:27.499092 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 
'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-05 01:06:27.499101 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-05 01:06:27.499109 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-05 01:06:27.499113 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-05 01:06:27.499117 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-05 01:06:27.499121 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-05 01:06:27.499129 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:06:27.499133 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:06:27.499326 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:06:27.499343 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:06:27.499350 | orchestrator | 2026-03-05 01:06:27.499357 | orchestrator | TASK 
[service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-03-05 01:06:27.499364 | orchestrator | Thursday 05 March 2026 01:02:31 +0000 (0:00:08.012) 0:00:20.039 ******** 2026-03-05 01:06:27.499370 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-05 01:06:27.499378 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-05 01:06:27.499384 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-05 01:06:27.499398 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-05 01:06:27.499410 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 01:06:27.499418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-05 01:06:27.499422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 01:06:27.499426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 01:06:27.499430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-05 01:06:27.499434 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 01:06:27.499442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-05 01:06:27.499445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 01:06:27.499453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 01:06:27.499457 | orchestrator | skipping: [testbed-manager] 2026-03-05 01:06:27.499464 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-05 01:06:27.499469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 01:06:27.499472 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:06:27.499476 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:06:27.499480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-05 
01:06:27.499484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 01:06:27.499491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 01:06:27.499582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-05 01:06:27.499586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 01:06:27.499590 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:06:27.499602 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-05 01:06:27.499606 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-05 01:06:27.499610 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-05 01:06:27.499614 | orchestrator | skipping: 
[testbed-node-3] 2026-03-05 01:06:27.499618 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-05 01:06:27.499625 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-05 01:06:27.499629 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-05 01:06:27.499633 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:06:27.499637 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-05 01:06:27.499641 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-05 01:06:27.499650 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-05 01:06:27.499654 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:06:27.499658 | orchestrator | 2026-03-05 01:06:27.499662 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-03-05 01:06:27.499667 | orchestrator | Thursday 05 March 2026 01:02:33 +0000 (0:00:02.065) 0:00:22.104 ******** 2026-03-05 01:06:27.499673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-05 01:06:27.499679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 01:06:27.499690 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 01:06:27.499696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-05 01:06:27.499703 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 01:06:27.499709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-05 01:06:27.499719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 01:06:27.499729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 01:06:27.499736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-05 01:06:27.499743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 01:06:27.499756 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-05 01:06:27.499772 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-05 01:06:27.499785 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-05 01:06:27.499800 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 
'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-05 01:06:27.499808 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 01:06:27.499815 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:06:27.499821 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:06:27.499832 | orchestrator | skipping: [testbed-manager] 2026-03-05 01:06:27.499836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-05 01:06:27.499841 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 01:06:27.499845 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-05 01:06:27.499849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 01:06:27.499852 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-05 01:06:27.499857 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-05 01:06:27.499866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-05 01:06:27.499870 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:06:27.499874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 01:06:27.499881 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:06:27.499885 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-05 01:06:27.499889 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-05 01:06:27.499893 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-05 01:06:27.499897 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-05 01:06:27.499901 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 
'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-05 01:06:27.499904 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:06:27.500250 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-05 01:06:27.500265 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:06:27.500270 | orchestrator | 2026-03-05 01:06:27.500274 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-03-05 01:06:27.500278 | orchestrator | Thursday 05 March 2026 01:02:37 +0000 (0:00:03.269) 0:00:25.374 ******** 2026-03-05 01:06:27.500288 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-05 01:06:27.500292 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 
'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-05 01:06:27.500297 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-05 01:06:27.500300 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-05 01:06:27.500304 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:06:27.500309 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-05 01:06:27.500317 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-05 01:06:27.500328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:06:27.500332 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-05 01:06:27.500336 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:06:27.500340 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-05 01:06:27.500459 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-05 01:06:27.500466 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:06:27.500473 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-05 01:06:27.500486 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:06:27.500519 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:06:27.500526 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-05 01:06:27.500532 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-05 01:06:27.500539 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-05 01:06:27.500545 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-05 01:06:27.500552 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-05 01:06:27.500558 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-05 01:06:27.500577 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 
'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-05 01:06:27.500584 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-05 01:06:27.500591 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-05 01:06:27.500598 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': 
{'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:06:27.500605 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:06:27.500612 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:06:27.500619 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:06:27.500630 | orchestrator | 2026-03-05 01:06:27.500636 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-03-05 01:06:27.500643 | orchestrator | Thursday 05 March 2026 01:02:44 +0000 (0:00:07.128) 0:00:32.502 ******** 2026-03-05 01:06:27.500649 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-05 01:06:27.500656 | orchestrator | 2026-03-05 01:06:27.500662 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-03-05 01:06:27.500672 | orchestrator | Thursday 05 March 2026 01:02:45 +0000 (0:00:01.728) 0:00:34.231 ******** 2026-03-05 01:06:27.500683 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1097031, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9718897, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-05 01:06:27.500692 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1097031, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9718897, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-05 01:06:27.500699 | orchestrator | skipping: 
[testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1097031, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9718897, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-05 01:06:27.500705 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1097031, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9718897, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-05 01:06:27.500712 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1097031, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9718897, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-05 01:06:27.500719 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1097031, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9718897, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-05 01:06:27.500738 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1097046, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9771287, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-05 01:06:27.500750 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1097046, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9771287, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-05 01:06:27.500758 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1097046, 'dev': 78, 'nlink': 1, 
'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9771287, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-05 01:06:27.500765 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/prometheus.rules, mode=0644, size=12944)
2026-03-05 01:06:27.500773 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1097031, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9718897, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-05 01:06:27.500780 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/prometheus.rules, mode=0644, size=12944)
2026-03-05 01:06:27.500787 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/ceph.rules, mode=0644, size=56929)
2026-03-05 01:06:27.500800 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/ceph.rules, mode=0644, size=56929)
2026-03-05 01:06:27.500811 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/prometheus.rules, mode=0644, size=12944)
2026-03-05 01:06:27.500818 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/ceph.rules, mode=0644, size=56929)
2026-03-05 01:06:27.500825 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/ceph.rules, mode=0644, size=56929)
2026-03-05 01:06:27.500832 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/openstack.rules, mode=0644, size=12293)
2026-03-05 01:06:27.500839 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/openstack.rules, mode=0644, size=12293)
2026-03-05 01:06:27.500853 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/ceph.rules, mode=0644, size=56929)
2026-03-05 01:06:27.500862 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/cadvisor.rules, mode=0644, size=3900)
2026-03-05 01:06:27.500869 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/cadvisor.rules, mode=0644, size=3900)
2026-03-05 01:06:27.500874 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/openstack.rules, mode=0644, size=12293)
2026-03-05 01:06:27.500879 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/openstack.rules, mode=0644, size=12293)
2026-03-05 01:06:27.500883 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/openstack.rules, mode=0644, size=12293)
2026-03-05 01:06:27.500888 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/prometheus.rules, mode=0644, size=12944)
2026-03-05 01:06:27.500909 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/haproxy.rules, mode=0644, size=7933)
2026-03-05 01:06:27.500914 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/ceph.rules, mode=0644, size=56929)
2026-03-05 01:06:27.500925 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/haproxy.rules, mode=0644, size=7933)
2026-03-05 01:06:27.500931 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/cadvisor.rules, mode=0644, size=3900)
2026-03-05 01:06:27.500935 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/openstack.rules, mode=0644, size=12293)
2026-03-05 01:06:27.500940 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/node.rules, mode=0644, size=14018)
2026-03-05 01:06:27.500944 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/cadvisor.rules, mode=0644, size=3900)
2026-03-05 01:06:27.500952 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/cadvisor.rules, mode=0644, size=3900)
2026-03-05 01:06:27.500957 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/haproxy.rules, mode=0644, size=7933)
2026-03-05 01:06:27.500967 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/node.rules, mode=0644, size=14018)
2026-03-05 01:06:27.500971 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/haproxy.rules, mode=0644, size=7933)
2026-03-05 01:06:27.500976 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/cadvisor.rules, mode=0644, size=3900)
2026-03-05 01:06:27.500980 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/node.rules, mode=0644, size=14018)
2026-03-05 01:06:27.500985 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/haproxy.rules, mode=0644, size=7933)
2026-03-05 01:06:27.500993 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/haproxy.rules, mode=0644, size=7933)
2026-03-05 01:06:27.500998 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/node.rules, mode=0644, size=14018)
2026-03-05 01:06:27.501009 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/node.rules, mode=0644, size=14018)
2026-03-05 01:06:27.501014 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/hardware.rules, mode=0644, size=5593)
2026-03-05 01:06:27.501019 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/hardware.rules, mode=0644, size=5593)
2026-03-05 01:06:27.501023 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/hardware.rules, mode=0644, size=5593)
2026-03-05 01:06:27.501031 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/hardware.rules, mode=0644, size=5593)
2026-03-05 01:06:27.501036 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/hardware.rules, mode=0644, size=5593)
2026-03-05 01:06:27.501040 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/elasticsearch.rules, mode=0644, size=5987)
2026-03-05 01:06:27.501051 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/ceph.rules, mode=0644, size=56929)
2026-03-05 01:06:27.501056 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/elasticsearch.rules, mode=0644, size=5987)
2026-03-05 01:06:27.501061 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/node.rules, mode=0644, size=14018)
2026-03-05 01:06:27.501066 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/elasticsearch.rules, mode=0644, size=5987)
2026-03-05 01:06:27.501092 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/prometheus.rec.rules, mode=0644, size=3)
2026-03-05 01:06:27.501098 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/elasticsearch.rules, mode=0644, size=5987)
2026-03-05 01:06:27.501102 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/elasticsearch.rules, mode=0644, size=5987)
2026-03-05 01:06:27.501115 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/prometheus.rec.rules, mode=0644, size=3)
2026-03-05 01:06:27.501120 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/prometheus.rec.rules, mode=0644, size=3)
2026-03-05 01:06:27.501125 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/prometheus.rec.rules, mode=0644, size=3)
2026-03-05 01:06:27.501130 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/hardware.rules, mode=0644, size=5593)
2026-03-05 01:06:27.501138 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/alertmanager.rec.rules, mode=0644, size=3)
2026-03-05 01:06:27.501142 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/alertmanager.rec.rules, mode=0644, size=3)
2026-03-05 01:06:27.501146 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/prometheus.rec.rules, mode=0644, size=3)
2026-03-05 01:06:27.501156 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/redfish.rules, mode=0644, size=334)
2026-03-05 01:06:27.501160 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/elasticsearch.rules, mode=0644, size=5987)
2026-03-05 01:06:27.501164 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/alertmanager.rec.rules, mode=0644, size=3)
2026-03-05 01:06:27.501171 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/alertmanager.rec.rules, mode=0644, size=3)
2026-03-05 01:06:27.501175 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/prometheus.rec.rules, mode=0644, size=3)
2026-03-05 01:06:27.501179 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/prometheus-extra.rules, mode=0644, size=7408)
2026-03-05 01:06:27.501183 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/redfish.rules, mode=0644, size=334)
2026-03-05 01:06:27.501193 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/redfish.rules, mode=0644, size=334)
2026-03-05 01:06:27.501197 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/alertmanager.rec.rules, mode=0644, size=3)
2026-03-05 01:06:27.501201 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/prometheus-extra.rules, mode=0644, size=7408)
2026-03-05 01:06:27.501208 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1097058, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0,
'ctime': 1772669792.9788845, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-05 01:06:27.501212 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097021, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9677906, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-05 01:06:27.501216 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097027, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9692285, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-05 01:06:27.501220 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097027, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9692285, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-05 01:06:27.501231 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1097058, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9788845, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-05 01:06:27.501235 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1097039, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9753087, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-05 01:06:27.501239 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1097043, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9764836, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-05 01:06:27.501246 | orchestrator | skipping: [testbed-node-1] => 
(item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1097043, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9764836, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-05 01:06:27.501250 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1097043, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9764836, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-05 01:06:27.501254 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1097058, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9788845, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-05 01:06:27.501258 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1097023, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9684532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-05 01:06:27.501442 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097027, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9692285, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-05 01:06:27.501449 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1097023, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9684532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-05 01:06:27.501453 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097027, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 
'ctime': 1772669792.9692285, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-05 01:06:27.501461 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097027, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9692285, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-05 01:06:27.501465 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1097043, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9764836, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-05 01:06:27.501469 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1097023, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9684532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-05 01:06:27.501473 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1097023, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9684532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-05 01:06:27.501481 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1097037, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9743686, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-05 01:06:27.501486 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1097037, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9743686, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-05 01:06:27.501493 | orchestrator | skipping: [testbed-node-1] => 
(item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1097023, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9684532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-05 01:06:27.501497 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1097036, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9730632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-05 01:06:27.501501 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1097037, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9743686, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-05 01:06:27.501505 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1097037, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9743686, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-05 01:06:27.501508 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097027, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9692285, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-05 01:06:27.501517 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1097036, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9730632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-05 01:06:27.501521 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1097025, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 
1772669792.9684532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-05 01:06:27.501528 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1097056, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9785414, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-05 01:06:27.501532 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:06:27.501536 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1097037, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9743686, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-05 01:06:27.501540 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1097036, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9730632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-05 01:06:27.501544 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1097036, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9730632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-05 01:06:27.501548 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1097056, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9785414, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-05 01:06:27.501552 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:06:27.501561 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1097023, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9684532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False})  2026-03-05 01:06:27.501565 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1097056, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9785414, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-05 01:06:27.501573 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:06:27.501577 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1097036, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9730632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-05 01:06:27.501581 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1097056, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9785414, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-05 01:06:27.501585 | orchestrator | skipping: 
[testbed-node-5] 2026-03-05 01:06:27.501589 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1097056, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9785414, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-05 01:06:27.501593 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:06:27.501597 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1097032, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9730265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-05 01:06:27.501601 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1097037, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9743686, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-05 01:06:27.501609 | orchestrator | skipping: [testbed-node-0] => 
(item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1097036, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9730632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-05 01:06:27.501617 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1097038, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9746397, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-05 01:06:27.501621 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1097056, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9785414, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-05 01:06:27.501625 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:06:27.501629 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': 
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1097035, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9730265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-05 01:06:27.501633 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1097030, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.970063, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-05 01:06:27.501636 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097045, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.976837, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-05 01:06:27.501641 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097021, 'dev': 78, 
'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9677906, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-05 01:06:27.501649 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1097058, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9788845, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-05 01:06:27.501656 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1097043, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9764836, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-05 01:06:27.501660 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097027, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9692285, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-05 01:06:27.501664 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1097023, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9684532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-05 01:06:27.501668 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1097037, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9743686, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-05 01:06:27.501672 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1097036, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9730632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-05 01:06:27.501676 | 
orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1097056, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9785414, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-05 01:06:27.501687 | orchestrator | 2026-03-05 01:06:27.501691 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2026-03-05 01:06:27.501695 | orchestrator | Thursday 05 March 2026 01:03:34 +0000 (0:00:48.388) 0:01:22.619 ******** 2026-03-05 01:06:27.501699 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-05 01:06:27.501702 | orchestrator | 2026-03-05 01:06:27.501706 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2026-03-05 01:06:27.501712 | orchestrator | Thursday 05 March 2026 01:03:35 +0000 (0:00:01.120) 0:01:23.739 ******** 2026-03-05 01:06:27.501716 | orchestrator | [WARNING]: Skipped 2026-03-05 01:06:27.501721 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-05 01:06:27.501728 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2026-03-05 01:06:27.501732 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-05 01:06:27.501736 | orchestrator | manager/prometheus.yml.d' is not a directory 2026-03-05 01:06:27.501740 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-05 01:06:27.501744 | orchestrator | [WARNING]: Skipped 2026-03-05 01:06:27.501747 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-05 
01:06:27.501751 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2026-03-05 01:06:27.501755 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-05 01:06:27.501759 | orchestrator | node-1/prometheus.yml.d' is not a directory 2026-03-05 01:06:27.501763 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-05 01:06:27.501766 | orchestrator | [WARNING]: Skipped 2026-03-05 01:06:27.501770 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-05 01:06:27.501774 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2026-03-05 01:06:27.501778 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-05 01:06:27.501781 | orchestrator | node-0/prometheus.yml.d' is not a directory 2026-03-05 01:06:27.501785 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-05 01:06:27.501789 | orchestrator | [WARNING]: Skipped 2026-03-05 01:06:27.501793 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-05 01:06:27.501796 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2026-03-05 01:06:27.501800 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-05 01:06:27.501804 | orchestrator | node-2/prometheus.yml.d' is not a directory 2026-03-05 01:06:27.501808 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-05 01:06:27.501811 | orchestrator | [WARNING]: Skipped 2026-03-05 01:06:27.501815 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-05 01:06:27.501819 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2026-03-05 01:06:27.501823 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-05 01:06:27.501826 | orchestrator | node-3/prometheus.yml.d' is not a directory 
2026-03-05 01:06:27.501830 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-05 01:06:27.501834 | orchestrator | [WARNING]: Skipped 2026-03-05 01:06:27.501838 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-05 01:06:27.501842 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2026-03-05 01:06:27.501845 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-05 01:06:27.501849 | orchestrator | node-4/prometheus.yml.d' is not a directory 2026-03-05 01:06:27.501853 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-05 01:06:27.501857 | orchestrator | [WARNING]: Skipped 2026-03-05 01:06:27.501860 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-05 01:06:27.501867 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2026-03-05 01:06:27.501871 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-05 01:06:27.501875 | orchestrator | node-5/prometheus.yml.d' is not a directory 2026-03-05 01:06:27.501879 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-05 01:06:27.501883 | orchestrator | 2026-03-05 01:06:27.501886 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2026-03-05 01:06:27.501890 | orchestrator | Thursday 05 March 2026 01:03:40 +0000 (0:00:04.647) 0:01:28.387 ******** 2026-03-05 01:06:27.501894 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-05 01:06:27.501898 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:06:27.501902 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-05 01:06:27.501905 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:06:27.501909 | orchestrator | skipping: [testbed-node-2] => 
(item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-05 01:06:27.501913 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:06:27.501917 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-05 01:06:27.501921 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:06:27.501924 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-05 01:06:27.501928 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:06:27.501932 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-05 01:06:27.501936 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:06:27.501939 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2026-03-05 01:06:27.501943 | orchestrator | 2026-03-05 01:06:27.501947 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2026-03-05 01:06:27.501951 | orchestrator | Thursday 05 March 2026 01:04:08 +0000 (0:00:27.907) 0:01:56.294 ******** 2026-03-05 01:06:27.501955 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-05 01:06:27.501959 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-05 01:06:27.501964 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:06:27.501968 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:06:27.501972 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-05 01:06:27.501978 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:06:27.501982 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-05 01:06:27.501986 | orchestrator | 
skipping: [testbed-node-5] 2026-03-05 01:06:27.501990 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-05 01:06:27.501993 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:06:27.501997 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-05 01:06:27.502001 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:06:27.502005 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2026-03-05 01:06:27.502008 | orchestrator | 2026-03-05 01:06:27.502033 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2026-03-05 01:06:27.502039 | orchestrator | Thursday 05 March 2026 01:04:11 +0000 (0:00:03.689) 0:01:59.983 ******** 2026-03-05 01:06:27.502043 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-05 01:06:27.502048 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-05 01:06:27.502059 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-05 01:06:27.502065 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:06:27.502071 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:06:27.502099 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:06:27.502105 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2026-03-05 01:06:27.502111 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-05 01:06:27.502118 | orchestrator | 
skipping: [testbed-node-4] 2026-03-05 01:06:27.502124 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-05 01:06:27.502129 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:06:27.502136 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-05 01:06:27.502141 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:06:27.502147 | orchestrator | 2026-03-05 01:06:27.502153 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2026-03-05 01:06:27.502160 | orchestrator | Thursday 05 March 2026 01:04:14 +0000 (0:00:02.581) 0:02:02.565 ******** 2026-03-05 01:06:27.502166 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-05 01:06:27.502171 | orchestrator | 2026-03-05 01:06:27.502179 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2026-03-05 01:06:27.502186 | orchestrator | Thursday 05 March 2026 01:04:15 +0000 (0:00:01.421) 0:02:03.986 ******** 2026-03-05 01:06:27.502192 | orchestrator | skipping: [testbed-manager] 2026-03-05 01:06:27.502198 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:06:27.502204 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:06:27.502210 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:06:27.502216 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:06:27.502289 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:06:27.502296 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:06:27.502300 | orchestrator | 2026-03-05 01:06:27.502305 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2026-03-05 01:06:27.502310 | orchestrator | Thursday 05 March 2026 01:04:16 +0000 (0:00:00.792) 0:02:04.779 ******** 2026-03-05 01:06:27.502314 | orchestrator 
| skipping: [testbed-manager] 2026-03-05 01:06:27.502319 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:06:27.502324 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:06:27.502328 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:06:27.502333 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:06:27.502337 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:06:27.502342 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:06:27.502346 | orchestrator | 2026-03-05 01:06:27.502351 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2026-03-05 01:06:27.502355 | orchestrator | Thursday 05 March 2026 01:04:19 +0000 (0:00:03.474) 0:02:08.253 ******** 2026-03-05 01:06:27.502360 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-05 01:06:27.502365 | orchestrator | skipping: [testbed-manager] 2026-03-05 01:06:27.502369 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-05 01:06:27.502373 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:06:27.502378 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-05 01:06:27.502382 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:06:27.502387 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-05 01:06:27.502391 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:06:27.502401 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-05 01:06:27.502406 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:06:27.502415 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-05 01:06:27.502420 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:06:27.502429 | 
orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-05 01:06:27.502434 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:06:27.502439 | orchestrator | 2026-03-05 01:06:27.502442 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2026-03-05 01:06:27.502446 | orchestrator | Thursday 05 March 2026 01:04:23 +0000 (0:00:03.199) 0:02:11.453 ******** 2026-03-05 01:06:27.502451 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-05 01:06:27.502457 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:06:27.502463 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-05 01:06:27.502470 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:06:27.502474 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-05 01:06:27.502478 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:06:27.502482 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-05 01:06:27.502486 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:06:27.502489 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-05 01:06:27.502493 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:06:27.502497 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2026-03-05 01:06:27.502501 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-05 01:06:27.502505 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:06:27.502508 | 
orchestrator | 2026-03-05 01:06:27.502512 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2026-03-05 01:06:27.502516 | orchestrator | Thursday 05 March 2026 01:04:26 +0000 (0:00:02.822) 0:02:14.276 ******** 2026-03-05 01:06:27.502520 | orchestrator | [WARNING]: Skipped 2026-03-05 01:06:27.502524 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-03-05 01:06:27.502527 | orchestrator | due to this access issue: 2026-03-05 01:06:27.502531 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-03-05 01:06:27.502535 | orchestrator | not a directory 2026-03-05 01:06:27.502539 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-05 01:06:27.502542 | orchestrator | 2026-03-05 01:06:27.502546 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2026-03-05 01:06:27.502550 | orchestrator | Thursday 05 March 2026 01:04:27 +0000 (0:00:01.501) 0:02:15.777 ******** 2026-03-05 01:06:27.502554 | orchestrator | skipping: [testbed-manager] 2026-03-05 01:06:27.502557 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:06:27.502561 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:06:27.502565 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:06:27.502568 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:06:27.502572 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:06:27.502576 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:06:27.502580 | orchestrator | 2026-03-05 01:06:27.502583 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-03-05 01:06:27.502587 | orchestrator | Thursday 05 March 2026 01:04:28 +0000 (0:00:01.044) 0:02:16.822 ******** 2026-03-05 01:06:27.502591 | orchestrator | skipping: [testbed-manager] 2026-03-05 01:06:27.502595 | orchestrator | skipping: [testbed-node-0] 2026-03-05 
01:06:27.502604 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:06:27.502608 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:06:27.502612 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:06:27.502616 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:06:27.502619 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:06:27.502623 | orchestrator | 2026-03-05 01:06:27.502627 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2026-03-05 01:06:27.502630 | orchestrator | Thursday 05 March 2026 01:04:29 +0000 (0:00:01.278) 0:02:18.100 ******** 2026-03-05 01:06:27.502635 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-05 01:06:27.502639 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-05 01:06:27.502650 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-05 01:06:27.502654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-05 01:06:27.502658 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-05 01:06:27.502663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:06:27.502667 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:06:27.502674 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-05 01:06:27.502678 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-05 01:06:27.502684 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 
'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:06:27.502693 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-05 01:06:27.502697 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-05 01:06:27.502701 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-03-05 01:06:27.502705 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:06:27.502712 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-05 01:06:27.502716 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-05 01:06:27.502720 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:06:27.502727 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-05 01:06:27.502734 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-05 01:06:27.502738 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 
2026-03-05 01:06:27.502742 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-05 01:06:27.502746 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-05 01:06:27.502753 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-05 01:06:27.502757 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-05 01:06:27.502767 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-05 01:06:27.502772 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:06:27.502776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 
'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:06:27.502780 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:06:27.502788 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:06:27.502791 | orchestrator | 2026-03-05 01:06:27.502795 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2026-03-05 01:06:27.502799 | orchestrator | Thursday 05 March 2026 01:04:34 +0000 (0:00:04.816) 0:02:22.916 ******** 2026-03-05 01:06:27.502803 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-05 01:06:27.502807 | orchestrator | skipping: [testbed-manager] 2026-03-05 01:06:27.502811 | orchestrator | 2026-03-05 01:06:27.502815 | orchestrator | TASK [prometheus : Flush 
handlers] ********************************************* 2026-03-05 01:06:27.502818 | orchestrator | Thursday 05 March 2026 01:04:36 +0000 (0:00:01.615) 0:02:24.532 ******** 2026-03-05 01:06:27.502822 | orchestrator | 2026-03-05 01:06:27.502826 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-05 01:06:27.502830 | orchestrator | Thursday 05 March 2026 01:04:36 +0000 (0:00:00.080) 0:02:24.613 ******** 2026-03-05 01:06:27.502833 | orchestrator | 2026-03-05 01:06:27.502837 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-05 01:06:27.502841 | orchestrator | Thursday 05 March 2026 01:04:36 +0000 (0:00:00.083) 0:02:24.696 ******** 2026-03-05 01:06:27.502845 | orchestrator | 2026-03-05 01:06:27.502849 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-05 01:06:27.502852 | orchestrator | Thursday 05 March 2026 01:04:36 +0000 (0:00:00.096) 0:02:24.792 ******** 2026-03-05 01:06:27.502856 | orchestrator | 2026-03-05 01:06:27.502860 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-05 01:06:27.502864 | orchestrator | Thursday 05 March 2026 01:04:36 +0000 (0:00:00.301) 0:02:25.094 ******** 2026-03-05 01:06:27.502867 | orchestrator | 2026-03-05 01:06:27.502871 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-05 01:06:27.502875 | orchestrator | Thursday 05 March 2026 01:04:37 +0000 (0:00:00.221) 0:02:25.315 ******** 2026-03-05 01:06:27.502879 | orchestrator | 2026-03-05 01:06:27.502882 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-05 01:06:27.502886 | orchestrator | Thursday 05 March 2026 01:04:37 +0000 (0:00:00.097) 0:02:25.413 ******** 2026-03-05 01:06:27.502890 | orchestrator | 2026-03-05 01:06:27.502894 | orchestrator | RUNNING HANDLER 
[prometheus : Restart prometheus-server container] ************* 2026-03-05 01:06:27.502898 | orchestrator | Thursday 05 March 2026 01:04:37 +0000 (0:00:00.103) 0:02:25.516 ******** 2026-03-05 01:06:27.502901 | orchestrator | changed: [testbed-manager] 2026-03-05 01:06:27.502905 | orchestrator | 2026-03-05 01:06:27.502909 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2026-03-05 01:06:27.502913 | orchestrator | Thursday 05 March 2026 01:04:54 +0000 (0:00:17.325) 0:02:42.842 ******** 2026-03-05 01:06:27.502916 | orchestrator | changed: [testbed-manager] 2026-03-05 01:06:27.502920 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:06:27.502926 | orchestrator | changed: [testbed-node-4] 2026-03-05 01:06:27.502930 | orchestrator | changed: [testbed-node-3] 2026-03-05 01:06:27.502933 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:06:27.502937 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:06:27.502944 | orchestrator | changed: [testbed-node-5] 2026-03-05 01:06:27.502947 | orchestrator | 2026-03-05 01:06:27.502951 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2026-03-05 01:06:27.502955 | orchestrator | Thursday 05 March 2026 01:05:09 +0000 (0:00:14.880) 0:02:57.722 ******** 2026-03-05 01:06:27.502963 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:06:27.502966 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:06:27.502970 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:06:27.502974 | orchestrator | 2026-03-05 01:06:27.502978 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2026-03-05 01:06:27.502982 | orchestrator | Thursday 05 March 2026 01:05:15 +0000 (0:00:06.102) 0:03:03.824 ******** 2026-03-05 01:06:27.502985 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:06:27.502989 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:06:27.502993 | orchestrator | 
changed: [testbed-node-1] 2026-03-05 01:06:27.502997 | orchestrator | 2026-03-05 01:06:27.503000 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2026-03-05 01:06:27.503004 | orchestrator | Thursday 05 March 2026 01:05:27 +0000 (0:00:12.254) 0:03:16.079 ******** 2026-03-05 01:06:27.503008 | orchestrator | changed: [testbed-manager] 2026-03-05 01:06:27.503012 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:06:27.503015 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:06:27.503019 | orchestrator | changed: [testbed-node-4] 2026-03-05 01:06:27.503023 | orchestrator | changed: [testbed-node-3] 2026-03-05 01:06:27.503027 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:06:27.503030 | orchestrator | changed: [testbed-node-5] 2026-03-05 01:06:27.503034 | orchestrator | 2026-03-05 01:06:27.503038 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2026-03-05 01:06:27.503042 | orchestrator | Thursday 05 March 2026 01:05:47 +0000 (0:00:19.256) 0:03:35.335 ******** 2026-03-05 01:06:27.503045 | orchestrator | changed: [testbed-manager] 2026-03-05 01:06:27.503049 | orchestrator | 2026-03-05 01:06:27.503053 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2026-03-05 01:06:27.503057 | orchestrator | Thursday 05 March 2026 01:05:59 +0000 (0:00:12.709) 0:03:48.045 ******** 2026-03-05 01:06:27.503060 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:06:27.503064 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:06:27.503068 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:06:27.503071 | orchestrator | 2026-03-05 01:06:27.503089 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2026-03-05 01:06:27.503093 | orchestrator | Thursday 05 March 2026 01:06:13 +0000 (0:00:13.355) 0:04:01.401 ******** 2026-03-05 01:06:27.503097 | orchestrator | 
changed: [testbed-manager] 2026-03-05 01:06:27.503100 | orchestrator | 2026-03-05 01:06:27.503104 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2026-03-05 01:06:27.503108 | orchestrator | Thursday 05 March 2026 01:06:19 +0000 (0:00:06.116) 0:04:07.517 ******** 2026-03-05 01:06:27.503112 | orchestrator | changed: [testbed-node-3] 2026-03-05 01:06:27.503115 | orchestrator | changed: [testbed-node-5] 2026-03-05 01:06:27.503119 | orchestrator | changed: [testbed-node-4] 2026-03-05 01:06:27.503123 | orchestrator | 2026-03-05 01:06:27.503127 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-05 01:06:27.503131 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-03-05 01:06:27.503135 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-05 01:06:27.503139 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-05 01:06:27.503143 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-05 01:06:27.503147 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-05 01:06:27.503151 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-05 01:06:27.503159 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-05 01:06:27.503163 | orchestrator | 2026-03-05 01:06:27.503166 | orchestrator | 2026-03-05 01:06:27.503170 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-05 01:06:27.503174 | orchestrator | Thursday 05 March 2026 01:06:26 +0000 (0:00:07.403) 0:04:14.920 ******** 2026-03-05 01:06:27.503178 | 
orchestrator | =============================================================================== 2026-03-05 01:06:27.503182 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 48.39s 2026-03-05 01:06:27.503185 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 27.91s 2026-03-05 01:06:27.503189 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 19.26s 2026-03-05 01:06:27.503193 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 17.33s 2026-03-05 01:06:27.503196 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 14.88s 2026-03-05 01:06:27.503200 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 13.36s 2026-03-05 01:06:27.503206 | orchestrator | prometheus : Restart prometheus-alertmanager container ----------------- 12.71s 2026-03-05 01:06:27.503210 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 12.25s 2026-03-05 01:06:27.503216 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 8.01s 2026-03-05 01:06:27.503220 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 7.40s 2026-03-05 01:06:27.503223 | orchestrator | prometheus : Copying over config.json files ----------------------------- 7.13s 2026-03-05 01:06:27.503227 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 6.12s 2026-03-05 01:06:27.503231 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 6.10s 2026-03-05 01:06:27.503234 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 5.22s 2026-03-05 01:06:27.503238 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.82s 2026-03-05 01:06:27.503242 | orchestrator | 
prometheus : Find prometheus host config overrides ---------------------- 4.65s 2026-03-05 01:06:27.503246 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.69s 2026-03-05 01:06:27.503249 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 3.48s 2026-03-05 01:06:27.503253 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 3.27s 2026-03-05 01:06:27.503257 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 3.20s 2026-03-05 01:06:27.503260 | orchestrator | 2026-03-05 01:06:27 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED 2026-03-05 01:06:27.503264 | orchestrator | 2026-03-05 01:06:27 | INFO  | Task 2007cb41-b21d-48ad-acaa-222e2de02aba is in state STARTED 2026-03-05 01:06:27.503268 | orchestrator | 2026-03-05 01:06:27 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:06:30.541890 | orchestrator | 2026-03-05 01:06:30 | INFO  | Task e38e2d08-9749-4391-972a-a00dd34b45f0 is in state STARTED 2026-03-05 01:06:30.543128 | orchestrator | 2026-03-05 01:06:30 | INFO  | Task a33bac03-d70d-421e-93a6-bd4d68696955 is in state STARTED 2026-03-05 01:06:30.544411 | orchestrator | 2026-03-05 01:06:30 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED 2026-03-05 01:06:30.545501 | orchestrator | 2026-03-05 01:06:30 | INFO  | Task 2007cb41-b21d-48ad-acaa-222e2de02aba is in state STARTED 2026-03-05 01:06:30.545540 | orchestrator | 2026-03-05 01:06:30 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:06:33.589156 | orchestrator | 2026-03-05 01:06:33 | INFO  | Task e38e2d08-9749-4391-972a-a00dd34b45f0 is in state STARTED 2026-03-05 01:06:33.590712 | orchestrator | 2026-03-05 01:06:33 | INFO  | Task a33bac03-d70d-421e-93a6-bd4d68696955 is in state STARTED 2026-03-05 01:06:33.592863 | orchestrator | 2026-03-05 01:06:33 | INFO  | Task 
4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED 2026-03-05 01:06:33.594791 | orchestrator | 2026-03-05 01:06:33 | INFO  | Task 2007cb41-b21d-48ad-acaa-222e2de02aba is in state STARTED 2026-03-05 01:06:33.594842 | orchestrator | 2026-03-05 01:06:33 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:06:36.631574 | orchestrator | 2026-03-05 01:06:36 | INFO  | Task e38e2d08-9749-4391-972a-a00dd34b45f0 is in state STARTED 2026-03-05 01:06:36.633666 | orchestrator | 2026-03-05 01:06:36 | INFO  | Task a33bac03-d70d-421e-93a6-bd4d68696955 is in state STARTED 2026-03-05 01:06:36.635584 | orchestrator | 2026-03-05 01:06:36 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED 2026-03-05 01:06:36.636673 | orchestrator | 2026-03-05 01:06:36 | INFO  | Task 2007cb41-b21d-48ad-acaa-222e2de02aba is in state STARTED 2026-03-05 01:06:36.636697 | orchestrator | 2026-03-05 01:06:36 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:06:39.678868 | orchestrator | 2026-03-05 01:06:39 | INFO  | Task e38e2d08-9749-4391-972a-a00dd34b45f0 is in state STARTED 2026-03-05 01:06:39.678952 | orchestrator | 2026-03-05 01:06:39 | INFO  | Task a33bac03-d70d-421e-93a6-bd4d68696955 is in state STARTED 2026-03-05 01:06:39.680127 | orchestrator | 2026-03-05 01:06:39 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED 2026-03-05 01:06:39.681541 | orchestrator | 2026-03-05 01:06:39 | INFO  | Task 2007cb41-b21d-48ad-acaa-222e2de02aba is in state STARTED 2026-03-05 01:06:39.681600 | orchestrator | 2026-03-05 01:06:39 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:06:42.718955 | orchestrator | 2026-03-05 01:06:42 | INFO  | Task e38e2d08-9749-4391-972a-a00dd34b45f0 is in state STARTED 2026-03-05 01:06:42.720839 | orchestrator | 2026-03-05 01:06:42 | INFO  | Task a33bac03-d70d-421e-93a6-bd4d68696955 is in state STARTED 2026-03-05 01:06:42.720995 | orchestrator | 2026-03-05 01:06:42 | INFO  | Task 
4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED 2026-03-05 01:06:42.722320 | orchestrator | 2026-03-05 01:06:42 | INFO  | Task 2007cb41-b21d-48ad-acaa-222e2de02aba is in state STARTED 2026-03-05 01:06:42.722372 | orchestrator | 2026-03-05 01:06:42 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:06:45.761431 | orchestrator | 2026-03-05 01:06:45 | INFO  | Task e38e2d08-9749-4391-972a-a00dd34b45f0 is in state STARTED 2026-03-05 01:06:45.762489 | orchestrator | 2026-03-05 01:06:45 | INFO  | Task a33bac03-d70d-421e-93a6-bd4d68696955 is in state STARTED 2026-03-05 01:06:45.763936 | orchestrator | 2026-03-05 01:06:45 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED 2026-03-05 01:06:45.766657 | orchestrator | 2026-03-05 01:06:45 | INFO  | Task 2007cb41-b21d-48ad-acaa-222e2de02aba is in state STARTED 2026-03-05 01:06:45.766743 | orchestrator | 2026-03-05 01:06:45 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:06:48.809581 | orchestrator | 2026-03-05 01:06:48 | INFO  | Task e38e2d08-9749-4391-972a-a00dd34b45f0 is in state STARTED 2026-03-05 01:06:48.810800 | orchestrator | 2026-03-05 01:06:48 | INFO  | Task a33bac03-d70d-421e-93a6-bd4d68696955 is in state STARTED 2026-03-05 01:06:48.812446 | orchestrator | 2026-03-05 01:06:48 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED 2026-03-05 01:06:48.813375 | orchestrator | 2026-03-05 01:06:48 | INFO  | Task 2007cb41-b21d-48ad-acaa-222e2de02aba is in state STARTED 2026-03-05 01:06:48.813444 | orchestrator | 2026-03-05 01:06:48 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:06:51.852873 | orchestrator | 2026-03-05 01:06:51 | INFO  | Task e38e2d08-9749-4391-972a-a00dd34b45f0 is in state STARTED 2026-03-05 01:06:51.853924 | orchestrator | 2026-03-05 01:06:51 | INFO  | Task a33bac03-d70d-421e-93a6-bd4d68696955 is in state STARTED 2026-03-05 01:06:51.855194 | orchestrator | 2026-03-05 01:06:51 | INFO  | Task 
4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED 2026-03-05 01:06:51.857308 | orchestrator | 2026-03-05 01:06:51 | INFO  | Task 2007cb41-b21d-48ad-acaa-222e2de02aba is in state STARTED 2026-03-05 01:06:51.857370 | orchestrator | 2026-03-05 01:06:51 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:06:54.890421 | orchestrator | 2026-03-05 01:06:54 | INFO  | Task e38e2d08-9749-4391-972a-a00dd34b45f0 is in state STARTED 2026-03-05 01:06:54.890959 | orchestrator | 2026-03-05 01:06:54 | INFO  | Task a33bac03-d70d-421e-93a6-bd4d68696955 is in state STARTED 2026-03-05 01:06:54.892176 | orchestrator | 2026-03-05 01:06:54 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED 2026-03-05 01:06:54.893564 | orchestrator | 2026-03-05 01:06:54 | INFO  | Task 2007cb41-b21d-48ad-acaa-222e2de02aba is in state STARTED 2026-03-05 01:06:54.893602 | orchestrator | 2026-03-05 01:06:54 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:06:57.935911 | orchestrator | 2026-03-05 01:06:57 | INFO  | Task e38e2d08-9749-4391-972a-a00dd34b45f0 is in state STARTED 2026-03-05 01:06:57.936448 | orchestrator | 2026-03-05 01:06:57 | INFO  | Task a33bac03-d70d-421e-93a6-bd4d68696955 is in state STARTED 2026-03-05 01:06:57.938233 | orchestrator | 2026-03-05 01:06:57 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED 2026-03-05 01:06:57.940040 | orchestrator | 2026-03-05 01:06:57 | INFO  | Task 2007cb41-b21d-48ad-acaa-222e2de02aba is in state STARTED 2026-03-05 01:06:57.940100 | orchestrator | 2026-03-05 01:06:57 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:07:00.983027 | orchestrator | 2026-03-05 01:07:00 | INFO  | Task e38e2d08-9749-4391-972a-a00dd34b45f0 is in state STARTED 2026-03-05 01:07:00.985549 | orchestrator | 2026-03-05 01:07:00 | INFO  | Task a33bac03-d70d-421e-93a6-bd4d68696955 is in state STARTED 2026-03-05 01:07:00.987971 | orchestrator | 2026-03-05 01:07:00 | INFO  | Task 
4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED 2026-03-05 01:07:00.990131 | orchestrator | 2026-03-05 01:07:00 | INFO  | Task 2007cb41-b21d-48ad-acaa-222e2de02aba is in state STARTED 2026-03-05 01:07:00.990438 | orchestrator | 2026-03-05 01:07:00 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:07:04.036026 | orchestrator | 2026-03-05 01:07:04 | INFO  | Task e38e2d08-9749-4391-972a-a00dd34b45f0 is in state STARTED 2026-03-05 01:07:04.037159 | orchestrator | 2026-03-05 01:07:04 | INFO  | Task a33bac03-d70d-421e-93a6-bd4d68696955 is in state STARTED 2026-03-05 01:07:04.038156 | orchestrator | 2026-03-05 01:07:04 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED 2026-03-05 01:07:04.039274 | orchestrator | 2026-03-05 01:07:04 | INFO  | Task 2007cb41-b21d-48ad-acaa-222e2de02aba is in state STARTED 2026-03-05 01:07:04.039318 | orchestrator | 2026-03-05 01:07:04 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:07:07.081708 | orchestrator | 2026-03-05 01:07:07 | INFO  | Task e38e2d08-9749-4391-972a-a00dd34b45f0 is in state STARTED 2026-03-05 01:07:07.082445 | orchestrator | 2026-03-05 01:07:07 | INFO  | Task a33bac03-d70d-421e-93a6-bd4d68696955 is in state STARTED 2026-03-05 01:07:07.083338 | orchestrator | 2026-03-05 01:07:07 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED 2026-03-05 01:07:07.084254 | orchestrator | 2026-03-05 01:07:07 | INFO  | Task 2007cb41-b21d-48ad-acaa-222e2de02aba is in state STARTED 2026-03-05 01:07:07.084296 | orchestrator | 2026-03-05 01:07:07 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:07:10.121860 | orchestrator | 2026-03-05 01:07:10 | INFO  | Task e38e2d08-9749-4391-972a-a00dd34b45f0 is in state STARTED 2026-03-05 01:07:10.122736 | orchestrator | 2026-03-05 01:07:10 | INFO  | Task a33bac03-d70d-421e-93a6-bd4d68696955 is in state STARTED 2026-03-05 01:07:10.123760 | orchestrator | 2026-03-05 01:07:10 | INFO  | Task 
4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED 2026-03-05 01:07:10.124745 | orchestrator | 2026-03-05 01:07:10 | INFO  | Task 2007cb41-b21d-48ad-acaa-222e2de02aba is in state STARTED 2026-03-05 01:07:10.124773 | orchestrator | 2026-03-05 01:07:10 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:07:13.160318 | orchestrator | 2026-03-05 01:07:13 | INFO  | Task e38e2d08-9749-4391-972a-a00dd34b45f0 is in state STARTED 2026-03-05 01:07:13.161694 | orchestrator | 2026-03-05 01:07:13 | INFO  | Task a33bac03-d70d-421e-93a6-bd4d68696955 is in state STARTED 2026-03-05 01:07:13.163002 | orchestrator | 2026-03-05 01:07:13 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED 2026-03-05 01:07:13.164841 | orchestrator | 2026-03-05 01:07:13 | INFO  | Task 2007cb41-b21d-48ad-acaa-222e2de02aba is in state STARTED 2026-03-05 01:07:13.164947 | orchestrator | 2026-03-05 01:07:13 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:07:16.215300 | orchestrator | 2026-03-05 01:07:16 | INFO  | Task e38e2d08-9749-4391-972a-a00dd34b45f0 is in state STARTED 2026-03-05 01:07:16.216373 | orchestrator | 2026-03-05 01:07:16 | INFO  | Task a33bac03-d70d-421e-93a6-bd4d68696955 is in state STARTED 2026-03-05 01:07:16.218074 | orchestrator | 2026-03-05 01:07:16 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED 2026-03-05 01:07:16.219948 | orchestrator | 2026-03-05 01:07:16 | INFO  | Task 2007cb41-b21d-48ad-acaa-222e2de02aba is in state STARTED 2026-03-05 01:07:16.219998 | orchestrator | 2026-03-05 01:07:16 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:07:19.254604 | orchestrator | 2026-03-05 01:07:19 | INFO  | Task e38e2d08-9749-4391-972a-a00dd34b45f0 is in state STARTED 2026-03-05 01:07:19.255181 | orchestrator | 2026-03-05 01:07:19 | INFO  | Task a33bac03-d70d-421e-93a6-bd4d68696955 is in state STARTED 2026-03-05 01:07:19.257183 | orchestrator | 2026-03-05 01:07:19 | INFO  | Task 
4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED 2026-03-05 01:07:19.258260 | orchestrator | 2026-03-05 01:07:19 | INFO  | Task 2007cb41-b21d-48ad-acaa-222e2de02aba is in state STARTED 2026-03-05 01:07:19.258296 | orchestrator | 2026-03-05 01:07:19 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:07:22.282159 | orchestrator | 2026-03-05 01:07:22 | INFO  | Task e38e2d08-9749-4391-972a-a00dd34b45f0 is in state STARTED 2026-03-05 01:07:22.283036 | orchestrator | 2026-03-05 01:07:22 | INFO  | Task a33bac03-d70d-421e-93a6-bd4d68696955 is in state STARTED 2026-03-05 01:07:22.284926 | orchestrator | 2026-03-05 01:07:22 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED 2026-03-05 01:07:22.285801 | orchestrator | 2026-03-05 01:07:22 | INFO  | Task 2007cb41-b21d-48ad-acaa-222e2de02aba is in state STARTED 2026-03-05 01:07:22.285825 | orchestrator | 2026-03-05 01:07:22 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:07:25.312756 | orchestrator | 2026-03-05 01:07:25 | INFO  | Task e38e2d08-9749-4391-972a-a00dd34b45f0 is in state STARTED 2026-03-05 01:07:25.313365 | orchestrator | 2026-03-05 01:07:25 | INFO  | Task a33bac03-d70d-421e-93a6-bd4d68696955 is in state STARTED 2026-03-05 01:07:25.314230 | orchestrator | 2026-03-05 01:07:25 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED 2026-03-05 01:07:25.315223 | orchestrator | 2026-03-05 01:07:25 | INFO  | Task 2007cb41-b21d-48ad-acaa-222e2de02aba is in state STARTED 2026-03-05 01:07:25.315266 | orchestrator | 2026-03-05 01:07:25 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:07:28.352513 | orchestrator | 2026-03-05 01:07:28 | INFO  | Task e38e2d08-9749-4391-972a-a00dd34b45f0 is in state STARTED 2026-03-05 01:07:28.353355 | orchestrator | 2026-03-05 01:07:28 | INFO  | Task a33bac03-d70d-421e-93a6-bd4d68696955 is in state STARTED 2026-03-05 01:07:28.354664 | orchestrator | 2026-03-05 01:07:28 | INFO  | Task 
4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED 2026-03-05 01:07:28.355590 | orchestrator | 2026-03-05 01:07:28 | INFO  | Task 2007cb41-b21d-48ad-acaa-222e2de02aba is in state STARTED 2026-03-05 01:07:28.355614 | orchestrator | 2026-03-05 01:07:28 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:07:31.400839 | orchestrator | 2026-03-05 01:07:31 | INFO  | Task e38e2d08-9749-4391-972a-a00dd34b45f0 is in state STARTED 2026-03-05 01:07:31.401660 | orchestrator | 2026-03-05 01:07:31 | INFO  | Task a33bac03-d70d-421e-93a6-bd4d68696955 is in state STARTED 2026-03-05 01:07:31.402720 | orchestrator | 2026-03-05 01:07:31 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED 2026-03-05 01:07:31.404368 | orchestrator | 2026-03-05 01:07:31 | INFO  | Task 2007cb41-b21d-48ad-acaa-222e2de02aba is in state STARTED 2026-03-05 01:07:31.404412 | orchestrator | 2026-03-05 01:07:31 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:07:34.435824 | orchestrator | 2026-03-05 01:07:34 | INFO  | Task e38e2d08-9749-4391-972a-a00dd34b45f0 is in state STARTED 2026-03-05 01:07:34.436432 | orchestrator | 2026-03-05 01:07:34 | INFO  | Task a33bac03-d70d-421e-93a6-bd4d68696955 is in state STARTED 2026-03-05 01:07:34.437466 | orchestrator | 2026-03-05 01:07:34 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED 2026-03-05 01:07:34.438540 | orchestrator | 2026-03-05 01:07:34 | INFO  | Task 2007cb41-b21d-48ad-acaa-222e2de02aba is in state STARTED 2026-03-05 01:07:34.438604 | orchestrator | 2026-03-05 01:07:34 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:07:37.487224 | orchestrator | 2026-03-05 01:07:37 | INFO  | Task e38e2d08-9749-4391-972a-a00dd34b45f0 is in state STARTED 2026-03-05 01:07:37.489468 | orchestrator | 2026-03-05 01:07:37 | INFO  | Task a33bac03-d70d-421e-93a6-bd4d68696955 is in state STARTED 2026-03-05 01:07:37.490109 | orchestrator | 2026-03-05 01:07:37 | INFO  | Task 
4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED 2026-03-05 01:07:37.490801 | orchestrator | 2026-03-05 01:07:37 | INFO  | Task 2007cb41-b21d-48ad-acaa-222e2de02aba is in state STARTED 2026-03-05 01:07:37.490815 | orchestrator | 2026-03-05 01:07:37 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:07:40.516805 | orchestrator | 2026-03-05 01:07:40 | INFO  | Task e38e2d08-9749-4391-972a-a00dd34b45f0 is in state STARTED 2026-03-05 01:07:40.517309 | orchestrator | 2026-03-05 01:07:40 | INFO  | Task a33bac03-d70d-421e-93a6-bd4d68696955 is in state STARTED 2026-03-05 01:07:40.518747 | orchestrator | 2026-03-05 01:07:40 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED 2026-03-05 01:07:40.519482 | orchestrator | 2026-03-05 01:07:40 | INFO  | Task 2007cb41-b21d-48ad-acaa-222e2de02aba is in state STARTED 2026-03-05 01:07:40.519528 | orchestrator | 2026-03-05 01:07:40 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:07:43.553338 | orchestrator | 2026-03-05 01:07:43 | INFO  | Task e38e2d08-9749-4391-972a-a00dd34b45f0 is in state STARTED 2026-03-05 01:07:43.553690 | orchestrator | 2026-03-05 01:07:43 | INFO  | Task a33bac03-d70d-421e-93a6-bd4d68696955 is in state STARTED 2026-03-05 01:07:43.554690 | orchestrator | 2026-03-05 01:07:43 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED 2026-03-05 01:07:43.555501 | orchestrator | 2026-03-05 01:07:43 | INFO  | Task 2007cb41-b21d-48ad-acaa-222e2de02aba is in state STARTED 2026-03-05 01:07:43.555623 | orchestrator | 2026-03-05 01:07:43 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:07:46.582927 | orchestrator | 2026-03-05 01:07:46 | INFO  | Task e38e2d08-9749-4391-972a-a00dd34b45f0 is in state STARTED 2026-03-05 01:07:46.584503 | orchestrator | 2026-03-05 01:07:46 | INFO  | Task a33bac03-d70d-421e-93a6-bd4d68696955 is in state STARTED 2026-03-05 01:07:46.585181 | orchestrator | 2026-03-05 01:07:46 | INFO  | Task 
4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED
2026-03-05 01:07:46.586089 | orchestrator | 2026-03-05 01:07:46 | INFO  | Task 2007cb41-b21d-48ad-acaa-222e2de02aba is in state STARTED
2026-03-05 01:07:46.586144 | orchestrator | 2026-03-05 01:07:46 | INFO  | Wait 1 second(s) until the next check
2026-03-05 01:07:49.618553 | orchestrator | 2026-03-05 01:07:49 | INFO  | Task e38e2d08-9749-4391-972a-a00dd34b45f0 is in state STARTED
2026-03-05 01:07:49.623826 | orchestrator | 2026-03-05 01:07:49 | INFO  | Task a33bac03-d70d-421e-93a6-bd4d68696955 is in state STARTED
2026-03-05 01:07:49.625839 | orchestrator | 2026-03-05 01:07:49 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED
2026-03-05 01:07:49.628247 | orchestrator | 2026-03-05 01:07:49 | INFO  | Task 2007cb41-b21d-48ad-acaa-222e2de02aba is in state STARTED
2026-03-05 01:07:49.628318 | orchestrator | 2026-03-05 01:07:49 | INFO  | Wait 1 second(s) until the next check
2026-03-05 01:07:52.682079 | orchestrator | 2026-03-05 01:07:52 | INFO  | Task e38e2d08-9749-4391-972a-a00dd34b45f0 is in state STARTED
2026-03-05 01:07:52.682748 | orchestrator | 2026-03-05 01:07:52 | INFO  | Task a33bac03-d70d-421e-93a6-bd4d68696955 is in state STARTED
2026-03-05 01:07:52.683949 | orchestrator | 2026-03-05 01:07:52 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED
2026-03-05 01:07:52.685545 | orchestrator | 2026-03-05 01:07:52 | INFO  | Task 2007cb41-b21d-48ad-acaa-222e2de02aba is in state STARTED
2026-03-05 01:07:52.685578 | orchestrator | 2026-03-05 01:07:52 | INFO  | Wait 1 second(s) until the next check
2026-03-05 01:07:55.722118 | orchestrator | 2026-03-05 01:07:55 | INFO  | Task e38e2d08-9749-4391-972a-a00dd34b45f0 is in state STARTED
2026-03-05 01:07:55.722824 | orchestrator | 2026-03-05 01:07:55 | INFO  | Task a33bac03-d70d-421e-93a6-bd4d68696955 is in state STARTED
2026-03-05 01:07:55.724209 | orchestrator | 2026-03-05 01:07:55 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED
2026-03-05 01:07:55.724954 | orchestrator | 2026-03-05 01:07:55 | INFO  | Task 2007cb41-b21d-48ad-acaa-222e2de02aba is in state STARTED
2026-03-05 01:07:55.724982 | orchestrator | 2026-03-05 01:07:55 | INFO  | Wait 1 second(s) until the next check
2026-03-05 01:07:58.758815 | orchestrator | 2026-03-05 01:07:58 | INFO  | Task e38e2d08-9749-4391-972a-a00dd34b45f0 is in state STARTED
2026-03-05 01:07:58.758950 | orchestrator | 2026-03-05 01:07:58 | INFO  | Task a33bac03-d70d-421e-93a6-bd4d68696955 is in state STARTED
2026-03-05 01:07:58.759931 | orchestrator | 2026-03-05 01:07:58 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED
2026-03-05 01:07:58.760734 | orchestrator | 2026-03-05 01:07:58 | INFO  | Task 2007cb41-b21d-48ad-acaa-222e2de02aba is in state STARTED
2026-03-05 01:07:58.760809 | orchestrator | 2026-03-05 01:07:58 | INFO  | Wait 1 second(s) until the next check
2026-03-05 01:08:01.789848 | orchestrator | 2026-03-05 01:08:01 | INFO  | Task e38e2d08-9749-4391-972a-a00dd34b45f0 is in state STARTED
2026-03-05 01:08:01.790224 | orchestrator | 2026-03-05 01:08:01 | INFO  | Task a33bac03-d70d-421e-93a6-bd4d68696955 is in state STARTED
2026-03-05 01:08:01.791258 | orchestrator | 2026-03-05 01:08:01 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED
2026-03-05 01:08:01.792113 | orchestrator | 2026-03-05 01:08:01 | INFO  | Task 2007cb41-b21d-48ad-acaa-222e2de02aba is in state STARTED
2026-03-05 01:08:01.792153 | orchestrator | 2026-03-05 01:08:01 | INFO  | Wait 1 second(s) until the next check
2026-03-05 01:08:04.817605 | orchestrator | 2026-03-05 01:08:04 | INFO  | Task e38e2d08-9749-4391-972a-a00dd34b45f0 is in state STARTED
2026-03-05 01:08:04.819057 | orchestrator | 2026-03-05 01:08:04 | INFO  | Task a33bac03-d70d-421e-93a6-bd4d68696955 is in state STARTED
2026-03-05 01:08:04.820872 | orchestrator | 2026-03-05 01:08:04 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED
2026-03-05 01:08:04.821735 | orchestrator | 2026-03-05 01:08:04 | INFO  | Task 2007cb41-b21d-48ad-acaa-222e2de02aba is in state STARTED
2026-03-05 01:08:04.821863 | orchestrator | 2026-03-05 01:08:04 | INFO  | Wait 1 second(s) until the next check
2026-03-05 01:08:07.922806 | orchestrator | 2026-03-05 01:08:07 | INFO  | Task e38e2d08-9749-4391-972a-a00dd34b45f0 is in state STARTED
2026-03-05 01:08:07.924730 | orchestrator | 2026-03-05 01:08:07 | INFO  | Task a33bac03-d70d-421e-93a6-bd4d68696955 is in state STARTED
2026-03-05 01:08:07.929184 | orchestrator | 2026-03-05 01:08:07 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED
2026-03-05 01:08:07.932129 | orchestrator | 2026-03-05 01:08:07 | INFO  | Task 2007cb41-b21d-48ad-acaa-222e2de02aba is in state STARTED
2026-03-05 01:08:07.932186 | orchestrator | 2026-03-05 01:08:07 | INFO  | Wait 1 second(s) until the next check
2026-03-05 01:08:10.971736 | orchestrator | 2026-03-05 01:08:10 | INFO  | Task e38e2d08-9749-4391-972a-a00dd34b45f0 is in state STARTED
2026-03-05 01:08:10.974132 | orchestrator | 2026-03-05 01:08:10 | INFO  | Task a33bac03-d70d-421e-93a6-bd4d68696955 is in state STARTED
2026-03-05 01:08:10.976087 | orchestrator | 2026-03-05 01:08:10 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED
2026-03-05 01:08:10.977241 | orchestrator | 2026-03-05 01:08:10 | INFO  | Task 2007cb41-b21d-48ad-acaa-222e2de02aba is in state STARTED
2026-03-05 01:08:10.977279 | orchestrator | 2026-03-05 01:08:10 | INFO  | Wait 1 second(s) until the next check
2026-03-05 01:08:14.024557 | orchestrator | 2026-03-05 01:08:14 | INFO  | Task e38e2d08-9749-4391-972a-a00dd34b45f0 is in state STARTED
2026-03-05 01:08:14.025600 | orchestrator | 2026-03-05 01:08:14 | INFO  | Task a33bac03-d70d-421e-93a6-bd4d68696955 is in state STARTED
2026-03-05 01:08:14.026422 | orchestrator | 2026-03-05 01:08:14 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED
2026-03-05 01:08:14.027703 | orchestrator | 2026-03-05 01:08:14 | INFO  | Task 2007cb41-b21d-48ad-acaa-222e2de02aba is in state STARTED
2026-03-05 01:08:14.027866 | orchestrator | 2026-03-05 01:08:14 | INFO  | Wait 1 second(s) until the next check
2026-03-05 01:08:17.113133 | orchestrator | 2026-03-05 01:08:17 | INFO  | Task e38e2d08-9749-4391-972a-a00dd34b45f0 is in state STARTED
2026-03-05 01:08:17.113212 | orchestrator | 2026-03-05 01:08:17 | INFO  | Task a33bac03-d70d-421e-93a6-bd4d68696955 is in state STARTED
2026-03-05 01:08:17.113493 | orchestrator | 2026-03-05 01:08:17 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED
2026-03-05 01:08:17.114319 | orchestrator | 2026-03-05 01:08:17 | INFO  | Task 2007cb41-b21d-48ad-acaa-222e2de02aba is in state STARTED
2026-03-05 01:08:17.114345 | orchestrator | 2026-03-05 01:08:17 | INFO  | Wait 1 second(s) until the next check
2026-03-05 01:08:20.146685 | orchestrator | 2026-03-05 01:08:20 | INFO  | Task e38e2d08-9749-4391-972a-a00dd34b45f0 is in state STARTED
2026-03-05 01:08:20.147465 | orchestrator | 2026-03-05 01:08:20 | INFO  | Task a33bac03-d70d-421e-93a6-bd4d68696955 is in state STARTED
2026-03-05 01:08:20.148595 | orchestrator | 2026-03-05 01:08:20 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED
2026-03-05 01:08:20.149786 | orchestrator | 2026-03-05 01:08:20 | INFO  | Task 2007cb41-b21d-48ad-acaa-222e2de02aba is in state STARTED
2026-03-05 01:08:20.149835 | orchestrator | 2026-03-05 01:08:20 | INFO  | Wait 1 second(s) until the next check
2026-03-05 01:08:23.184464 | orchestrator | 2026-03-05 01:08:23 | INFO  | Task e38e2d08-9749-4391-972a-a00dd34b45f0 is in state STARTED
2026-03-05 01:08:23.185551 | orchestrator | 2026-03-05 01:08:23 | INFO  | Task a33bac03-d70d-421e-93a6-bd4d68696955 is in state STARTED
2026-03-05 01:08:23.186866 | orchestrator | 2026-03-05 01:08:23 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED
2026-03-05 01:08:23.188486 | orchestrator | 2026-03-05 01:08:23 | INFO  | Task 2007cb41-b21d-48ad-acaa-222e2de02aba is in state STARTED
2026-03-05 01:08:23.188524 | orchestrator | 2026-03-05 01:08:23 | INFO  | Wait 1 second(s) until the next check
2026-03-05 01:08:26.248883 | orchestrator | 2026-03-05 01:08:26 | INFO  | Task e38e2d08-9749-4391-972a-a00dd34b45f0 is in state STARTED
2026-03-05 01:08:26.249547 | orchestrator | 2026-03-05 01:08:26 | INFO  | Task a33bac03-d70d-421e-93a6-bd4d68696955 is in state STARTED
2026-03-05 01:08:26.250833 | orchestrator | 2026-03-05 01:08:26 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED
2026-03-05 01:08:26.252205 | orchestrator | 2026-03-05 01:08:26 | INFO  | Task 2007cb41-b21d-48ad-acaa-222e2de02aba is in state STARTED
2026-03-05 01:08:26.252253 | orchestrator | 2026-03-05 01:08:26 | INFO  | Wait 1 second(s) until the next check
2026-03-05 01:08:29.293095 | orchestrator | 2026-03-05 01:08:29 | INFO  | Task e38e2d08-9749-4391-972a-a00dd34b45f0 is in state STARTED
2026-03-05 01:08:29.293961 | orchestrator | 2026-03-05 01:08:29 | INFO  | Task a33bac03-d70d-421e-93a6-bd4d68696955 is in state STARTED
2026-03-05 01:08:29.294929 | orchestrator | 2026-03-05 01:08:29 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED
2026-03-05 01:08:29.295995 | orchestrator | 2026-03-05 01:08:29 | INFO  | Task 2007cb41-b21d-48ad-acaa-222e2de02aba is in state STARTED
2026-03-05 01:08:29.296051 | orchestrator | 2026-03-05 01:08:29 | INFO  | Wait 1 second(s) until the next check
2026-03-05 01:08:32.336214 | orchestrator | 2026-03-05 01:08:32 | INFO  | Task e38e2d08-9749-4391-972a-a00dd34b45f0 is in state STARTED
2026-03-05 01:08:32.336614 | orchestrator | 2026-03-05 01:08:32 | INFO  | Task a33bac03-d70d-421e-93a6-bd4d68696955 is in state STARTED
2026-03-05 01:08:32.337649 | orchestrator | 2026-03-05 01:08:32 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED
2026-03-05 01:08:32.338209 | orchestrator | 2026-03-05 01:08:32 | INFO  | Task 2007cb41-b21d-48ad-acaa-222e2de02aba is in state STARTED
2026-03-05 01:08:32.338514 | orchestrator | 2026-03-05 01:08:32 | INFO  | Wait 1 second(s) until the next check
2026-03-05 01:08:35.367815 | orchestrator | 2026-03-05 01:08:35 | INFO  | Task e38e2d08-9749-4391-972a-a00dd34b45f0 is in state STARTED
2026-03-05 01:08:35.370221 | orchestrator | 2026-03-05 01:08:35 | INFO  | Task a33bac03-d70d-421e-93a6-bd4d68696955 is in state STARTED
2026-03-05 01:08:35.371085 | orchestrator | 2026-03-05 01:08:35 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED
2026-03-05 01:08:35.374068 | orchestrator | 2026-03-05 01:08:35 | INFO  | Task 2007cb41-b21d-48ad-acaa-222e2de02aba is in state STARTED
2026-03-05 01:08:35.374112 | orchestrator | 2026-03-05 01:08:35 | INFO  | Wait 1 second(s) until the next check
2026-03-05 01:08:38.398261 | orchestrator | 2026-03-05 01:08:38 | INFO  | Task e38e2d08-9749-4391-972a-a00dd34b45f0 is in state STARTED
2026-03-05 01:08:38.399039 | orchestrator | 2026-03-05 01:08:38 | INFO  | Task a33bac03-d70d-421e-93a6-bd4d68696955 is in state STARTED
2026-03-05 01:08:38.400272 | orchestrator | 2026-03-05 01:08:38 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED
2026-03-05 01:08:38.401315 | orchestrator | 2026-03-05 01:08:38 | INFO  | Task 2007cb41-b21d-48ad-acaa-222e2de02aba is in state STARTED
2026-03-05 01:08:38.401350 | orchestrator | 2026-03-05 01:08:38 | INFO  | Wait 1 second(s) until the next check
2026-03-05 01:08:41.436052 | orchestrator | 2026-03-05 01:08:41 | INFO  | Task e38e2d08-9749-4391-972a-a00dd34b45f0 is in state STARTED
2026-03-05 01:08:41.436748 | orchestrator | 2026-03-05 01:08:41 | INFO  | Task a33bac03-d70d-421e-93a6-bd4d68696955 is in state STARTED
2026-03-05 01:08:41.438071 | orchestrator | 2026-03-05 01:08:41 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED
2026-03-05 01:08:41.439133 | orchestrator | 2026-03-05 01:08:41 | INFO  | Task 2007cb41-b21d-48ad-acaa-222e2de02aba is in state STARTED
2026-03-05 01:08:41.439157 | orchestrator | 2026-03-05 01:08:41 | INFO  | Wait 1 second(s) until the next check
2026-03-05 01:08:44.466871 | orchestrator | 2026-03-05 01:08:44 | INFO  | Task e38e2d08-9749-4391-972a-a00dd34b45f0 is in state STARTED
2026-03-05 01:08:44.467409 | orchestrator | 2026-03-05 01:08:44 | INFO  | Task a33bac03-d70d-421e-93a6-bd4d68696955 is in state STARTED
2026-03-05 01:08:44.468155 | orchestrator | 2026-03-05 01:08:44 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED
2026-03-05 01:08:44.468816 | orchestrator | 2026-03-05 01:08:44 | INFO  | Task 2007cb41-b21d-48ad-acaa-222e2de02aba is in state STARTED
2026-03-05 01:08:44.468837 | orchestrator | 2026-03-05 01:08:44 | INFO  | Wait 1 second(s) until the next check
2026-03-05 01:08:47.500645 | orchestrator | 2026-03-05 01:08:47 | INFO  | Task e38e2d08-9749-4391-972a-a00dd34b45f0 is in state STARTED
2026-03-05 01:08:47.501222 | orchestrator | 2026-03-05 01:08:47 | INFO  | Task a33bac03-d70d-421e-93a6-bd4d68696955 is in state STARTED
2026-03-05 01:08:47.501962 | orchestrator | 2026-03-05 01:08:47 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED
2026-03-05 01:08:47.505281 | orchestrator | 2026-03-05 01:08:47 | INFO  | Task 2007cb41-b21d-48ad-acaa-222e2de02aba is in state STARTED
2026-03-05 01:08:47.505351 | orchestrator | 2026-03-05 01:08:47 | INFO  | Wait 1 second(s) until the next check
2026-03-05 01:10:50.651611 | orchestrator | 2026-03-05 01:10:50 | INFO  | Task e38e2d08-9749-4391-972a-a00dd34b45f0 is in state SUCCESS
2026-03-05 01:10:50.655166 | orchestrator |
2026-03-05 01:10:50.655281 | orchestrator |
2026-03-05 01:10:50.655292 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-05 01:10:50.655301 |
orchestrator |
2026-03-05 01:10:50.655333 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-05 01:10:50.655341 | orchestrator | Thursday 05 March 2026 01:06:32 +0000 (0:00:00.291) 0:00:00.291 ********
2026-03-05 01:10:50.655348 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:10:50.655356 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:10:50.655364 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:10:50.655371 | orchestrator |
2026-03-05 01:10:50.655377 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-05 01:10:50.655384 | orchestrator | Thursday 05 March 2026 01:06:32 +0000 (0:00:00.322) 0:00:00.614 ********
2026-03-05 01:10:50.655391 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2026-03-05 01:10:50.655398 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2026-03-05 01:10:50.655405 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2026-03-05 01:10:50.655411 | orchestrator |
2026-03-05 01:10:50.655418 | orchestrator | PLAY [Apply role designate] ****************************************************
2026-03-05 01:10:50.655426 | orchestrator |
2026-03-05 01:10:50.655450 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-03-05 01:10:50.655457 | orchestrator | Thursday 05 March 2026 01:06:33 +0000 (0:00:00.498) 0:00:01.112 ********
2026-03-05 01:10:50.655465 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 01:10:50.655473 | orchestrator |
2026-03-05 01:10:50.655480 | orchestrator | TASK [service-ks-register : designate | Creating services] *********************
2026-03-05 01:10:50.655486 | orchestrator | Thursday 05 March 2026 01:06:33 +0000 (0:00:00.607) 0:00:01.719 ********
2026-03-05 01:10:50.655493 | orchestrator | changed: [testbed-node-0] => (item=designate (dns))
2026-03-05 01:10:50.655500 | orchestrator |
2026-03-05 01:10:50.655507 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ********************
2026-03-05 01:10:50.655514 | orchestrator | Thursday 05 March 2026 01:06:37 +0000 (0:00:03.926) 0:00:05.646 ********
2026-03-05 01:10:50.655522 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal)
2026-03-05 01:10:50.655529 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public)
2026-03-05 01:10:50.655536 | orchestrator |
2026-03-05 01:10:50.655543 | orchestrator | TASK [service-ks-register : designate | Creating projects] *********************
2026-03-05 01:10:50.655550 | orchestrator | Thursday 05 March 2026 01:06:45 +0000 (0:00:07.607) 0:00:13.253 ********
2026-03-05 01:10:50.655556 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-05 01:10:50.655563 | orchestrator |
2026-03-05 01:10:50.655569 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************
2026-03-05 01:10:50.655745 | orchestrator | Thursday 05 March 2026 01:06:49 +0000 (0:00:03.670) 0:00:16.924 ********
2026-03-05 01:10:50.655750 | orchestrator | changed: [testbed-node-0] => (item=designate -> service)
2026-03-05 01:10:50.655754 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-05 01:10:50.655957 | orchestrator |
2026-03-05 01:10:50.655972 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************
2026-03-05 01:10:50.655979 | orchestrator | Thursday 05 March 2026 01:06:53 +0000 (0:00:04.370) 0:00:21.295 ********
2026-03-05 01:10:50.655986 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-05 01:10:50.655993 | orchestrator |
2026-03-05 01:10:50.655999 | orchestrator | TASK [service-ks-register : designate | Granting user roles] *******************
2026-03-05 01:10:50.656005 | orchestrator | Thursday 05 March 2026 01:06:57 +0000 (0:00:03.849) 0:00:25.144 ********
2026-03-05 01:10:50.656011 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin)
2026-03-05 01:10:50.656018 | orchestrator |
2026-03-05 01:10:50.656024 | orchestrator | TASK [designate : Ensuring config directories exist] ***************************
2026-03-05 01:10:50.656031 | orchestrator | Thursday 05 March 2026 01:07:01 +0000 (0:00:04.361) 0:00:29.506 ********
2026-03-05 01:10:50.656042 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-05 01:10:50.656116 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-05 01:10:50.656134 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-05 01:10:50.656142 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-05 01:10:50.656150 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-05 01:10:50.656156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-05 01:10:50.656170 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-05 01:10:50.656185 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-05 01:10:50.656196 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-05 01:10:50.656204 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-05 01:10:50.656211 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-05 01:10:50.656218 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-05 01:10:50.656229 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-05 01:10:50.656236 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-05 01:10:50.656248 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-05 01:10:50.656259 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-05 01:10:50.656266 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-05 01:10:50.656272 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-05 01:10:50.656279 | orchestrator |
2026-03-05 01:10:50.656284 | orchestrator | TASK [designate : Check if policies shall be overwritten] **********************
2026-03-05 01:10:50.656291 | orchestrator | Thursday 05 March 2026 01:07:06 +0000 (0:00:04.846) 0:00:34.352 ********
2026-03-05 01:10:50.656303 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:10:50.656309 | orchestrator |
2026-03-05 01:10:50.656316 | orchestrator | TASK [designate : Set designate policy file] ***********************************
2026-03-05 01:10:50.656322 | orchestrator | Thursday 05 March 2026 01:07:06 +0000 (0:00:00.302) 0:00:34.655 ********
2026-03-05 01:10:50.656351 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:10:50.656358 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:10:50.656364 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:10:50.656370 | orchestrator |
2026-03-05 01:10:50.656376 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-03-05 01:10:50.656392 | orchestrator | Thursday 05 March 2026 01:07:07 +0000 (0:00:00.785) 0:00:35.441 ********
2026-03-05 01:10:50.656400 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 01:10:50.656407 | orchestrator |
2026-03-05 01:10:50.656414 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ******
2026-03-05 01:10:50.656421 | orchestrator | Thursday 05 March 2026 01:07:08 +0000 (0:00:01.280) 0:00:36.721 ********
2026-03-05 01:10:50.656428 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-05 01:10:50.656448 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-05 01:10:50.656460 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-05 01:10:50.656467 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-05 01:10:50.656479 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-05 01:10:50.656486 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-05 01:10:50.656492 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-05 01:10:50.656551 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-05 01:10:50.656564 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-05 01:10:50.656572 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-05 01:10:50.656585 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-05 01:10:50.656592 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-05 01:10:50.656599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-05 01:10:50.656609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions':
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-05 01:10:50.656616 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-05 01:10:50.656623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:10:50.656630 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:10:50.656643 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:10:50.656651 | orchestrator | 2026-03-05 01:10:50.656658 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-03-05 01:10:50.656776 | orchestrator | Thursday 05 March 2026 01:07:16 +0000 (0:00:07.330) 0:00:44.052 ******** 2026-03-05 01:10:50.656785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-05 01:10:50.656792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': 
{'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-05 01:10:50.656883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-05 01:10:50.656899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-05 01:10:50.656911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-05 01:10:50.656915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-05 01:10:50.656953 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:10:50.656958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9001', 'listen_port': '9001'}}}})  2026-03-05 01:10:50.656963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-05 01:10:50.657460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-05 01:10:50.657533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-05 01:10:50.657553 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-05 01:10:50.657560 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-05 01:10:50.657567 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:10:50.657575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-05 01:10:50.657583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-05 01:10:50.657601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-05 01:10:50.657612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-05 01:10:50.657643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-05 01:10:50.657650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-05 01:10:50.657656 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:10:50.657663 | orchestrator | 2026-03-05 01:10:50.657670 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-03-05 01:10:50.657677 | orchestrator | Thursday 05 March 2026 01:07:17 +0000 (0:00:01.809) 0:00:45.862 ******** 2026-03-05 01:10:50.657684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-05 01:10:50.657691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-05 01:10:50.657703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 
'timeout': '30'}}})  2026-03-05 01:10:50.657711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-05 01:10:50.657726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-05 01:10:50.657733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-05 01:10:50.657740 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:10:50.657747 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-05 01:10:50.657754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-05 01:10:50.657765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-05 01:10:50.657771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-05 01:10:50.657788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-05 01:10:50.657795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-05 01:10:50.657801 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:10:50.657809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-05 01:10:50.657816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-05 01:10:50.657908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-05 01:10:50.657961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-05 01:10:50.657985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-05 01:10:50.657992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-05 01:10:50.657998 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:10:50.658004 | orchestrator | 2026-03-05 01:10:50.658010 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2026-03-05 01:10:50.658181 | orchestrator | Thursday 05 March 2026 01:07:20 +0000 (0:00:02.508) 0:00:48.370 ******** 2026-03-05 01:10:50.658192 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-05 01:10:50.658199 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-05 01:10:50.658218 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-05 01:10:50.658240 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-05 01:10:50.658248 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-05 01:10:50.658254 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-05 01:10:50.658261 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-05 01:10:50.658269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-05 01:10:50.658281 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-05 01:10:50.658294 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-05 01:10:50.658306 
| orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-05 01:10:50.658313 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-05 01:10:50.658321 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-05 01:10:50.658328 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-05 01:10:50.658335 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-05 01:10:50.658351 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:10:50.658358 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-05 01:10:50.658369 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-05 01:10:50.658376 | orchestrator |
2026-03-05 01:10:50.658383 | orchestrator | TASK [designate : Copying over designate.conf] *********************************
2026-03-05 01:10:50.658390 | orchestrator | Thursday 05 March 2026 01:07:28 +0000 (0:00:08.451) 0:00:56.822 ********
2026-03-05 01:10:50.658397 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-05 01:10:50.658404 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-05 01:10:50.658411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-05 01:10:50.658450 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-05 01:10:50.658462 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-05 01:10:50.658469 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 
'timeout': '30'}}}) 2026-03-05 01:10:50.658476 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-05 01:10:50.658482 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-05 01:10:50.658489 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-05 01:10:50.658505 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-05 01:10:50.658512 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-05 01:10:50.658522 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-05 01:10:50.658530 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-05 01:10:50.658536 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-05 01:10:50.658543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-05 01:10:50.658555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-05 01:10:50.658577 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-05 01:10:50.658584 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-05 01:10:50.658590 | orchestrator |
2026-03-05 01:10:50.658597 | orchestrator | TASK [designate : Copying over pools.yaml] *************************************
2026-03-05 01:10:50.658631 | orchestrator | Thursday 05 March 2026 01:08:01 +0000 (0:00:32.869) 0:01:29.691 ********
2026-03-05 01:10:50.658657 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-03-05 01:10:50.658680 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-03-05 01:10:50.658687 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-03-05 01:10:50.658694 | orchestrator |
2026-03-05 01:10:50.658700 | orchestrator | TASK [designate : Copying over named.conf] *************************************
2026-03-05 01:10:50.658706 | orchestrator | Thursday 05 March 2026 01:08:10 +0000 (0:00:03.944) 0:01:38.666 ********
2026-03-05 01:10:50.658713 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-03-05 01:10:50.658719 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-03-05 01:10:50.658726 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-03-05 01:10:50.658731 | orchestrator |
2026-03-05 01:10:50.658737 | orchestrator | TASK [designate : Copying over rndc.conf] **************************************
2026-03-05 01:10:50.658744 | orchestrator | Thursday 05 March 2026 01:08:14 +0000 (0:00:03.944) 0:01:42.611 ********
2026-03-05 01:10:50.658751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': 
'9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-05 01:10:50.658765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-05 01:10:50.658787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-05 01:10:50.658798 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-05 01:10:50.658806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-05 01:10:50.658812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-05 01:10:50.658819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-05 01:10:50.658831 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-05 01:10:50.658837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-05 01:10:50.658848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 
'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-05 01:10:50.658858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-05 01:10:50.658865 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-05 01:10:50.658871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-05 01:10:50.658885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-05 01:10:50.658904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-05 01:10:50.658933 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:10:50.658944 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:10:50.658954 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:10:50.658960 | orchestrator | 2026-03-05 01:10:50.658966 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2026-03-05 01:10:50.658972 | orchestrator | Thursday 05 March 2026 01:08:18 +0000 (0:00:03.623) 0:01:46.235 ******** 2026-03-05 01:10:50.658978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': 
{'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-05 01:10:50.658990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-05 01:10:50.658996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-05 01:10:50.659007 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-05 01:10:50.659019 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': 
'30'}}}) 2026-03-05 01:10:50.659025 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-05 01:10:50.659034 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-05 01:10:50.659039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-05 01:10:50.659043 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': 
{'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-05 01:10:50.659049 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-05 01:10:50.659053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-05 01:10:50.659060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-05 01:10:50.659064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-05 01:10:50.659073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-05 01:10:50.659077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-05 01:10:50.659081 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:10:50.659088 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:10:50.659092 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:10:50.659096 | orchestrator | 2026-03-05 01:10:50.659100 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-05 01:10:50.659106 | orchestrator | Thursday 05 March 2026 01:08:23 +0000 (0:00:05.063) 0:01:51.298 ******** 2026-03-05 01:10:50.659110 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:10:50.659114 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:10:50.659118 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:10:50.659122 | orchestrator | 2026-03-05 01:10:50.659126 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2026-03-05 01:10:50.659134 | orchestrator | Thursday 05 March 2026 01:08:24 +0000 (0:00:01.489) 0:01:52.788 ******** 2026-03-05 01:10:50.659140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-05 01:10:50.659147 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-05 01:10:50.659154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-05 01:10:50.659159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-05 01:10:50.659172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': 
{'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-05 01:10:50.659182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-05 01:10:50.659194 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:10:50.659201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-05 01:10:50.659207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-05 01:10:50.659211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-05 01:10:50.659214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-mdns 5672'], 'timeout': '30'}}})  2026-03-05 01:10:50.659224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-05 01:10:50.659228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-05 01:10:50.659236 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:10:50.659246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': 
{'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-05 01:10:50.659254 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-05 01:10:50.659265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-05 01:10:50.659271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-05 01:10:50.659277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-05 01:10:50.659288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-05 01:10:50.659299 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:10:50.659305 | orchestrator | 2026-03-05 01:10:50.659311 | orchestrator | TASK [designate : Check designate containers] ********************************** 2026-03-05 01:10:50.659316 | orchestrator | Thursday 05 March 2026 01:08:28 +0000 (0:00:03.354) 0:01:56.142 ******** 2026-03-05 01:10:50.659328 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 
'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-05 01:10:50.659335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-05 01:10:50.659342 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-05 01:10:50.659348 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-05 01:10:50.659359 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-05 01:10:50.659376 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-05 01:10:50.659384 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-05 01:10:50.659390 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-05 01:10:50.659396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 
'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-05 01:10:50.659403 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-05 01:10:50.659414 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-05 01:10:50 | INFO  | Task a33bac03-d70d-421e-93a6-bd4d68696955 is in state SUCCESS 2026-03-05 01:10:50.659436 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 
'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-05 01:10:50.659446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-05 01:10:50.659453 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-05 01:10:50.659460 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 
'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-05 01:10:50.659466 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:10:50.659472 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:10:50.659484 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:10:50.659494 | orchestrator | 2026-03-05 01:10:50.659500 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-05 01:10:50.659505 | orchestrator | Thursday 05 March 2026 01:08:34 +0000 (0:00:06.408) 0:02:02.550 ******** 2026-03-05 01:10:50.659512 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:10:50.659517 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:10:50.659522 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:10:50.659528 | orchestrator | 2026-03-05 01:10:50.659533 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2026-03-05 01:10:50.659539 | orchestrator | Thursday 05 March 2026 01:08:35 +0000 (0:00:00.527) 0:02:03.077 ******** 2026-03-05 01:10:50.659545 | orchestrator | changed: [testbed-node-0] => (item=designate) 2026-03-05 01:10:50.659551 | orchestrator | 2026-03-05 01:10:50.659561 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2026-03-05 01:10:50.659567 | orchestrator | Thursday 05 March 2026 01:08:38 +0000 (0:00:03.219) 0:02:06.297 ******** 2026-03-05 01:10:50.659574 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-05 01:10:50.659581 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2026-03-05 01:10:50.659586 | orchestrator | 2026-03-05 01:10:50.659592 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2026-03-05 01:10:50.659598 | orchestrator | Thursday 05 March 2026 01:08:41 +0000 (0:00:03.304) 0:02:09.601 ******** 2026-03-05 01:10:50.659603 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:10:50.659609 | 
orchestrator | 2026-03-05 01:10:50.659616 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-03-05 01:10:50.659622 | orchestrator | Thursday 05 March 2026 01:09:02 +0000 (0:00:20.838) 0:02:30.439 ******** 2026-03-05 01:10:50.659628 | orchestrator | 2026-03-05 01:10:50.659633 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-03-05 01:10:50.659638 | orchestrator | Thursday 05 March 2026 01:09:02 +0000 (0:00:00.083) 0:02:30.523 ******** 2026-03-05 01:10:50.659645 | orchestrator | 2026-03-05 01:10:50.659651 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-03-05 01:10:50.659658 | orchestrator | Thursday 05 March 2026 01:09:02 +0000 (0:00:00.096) 0:02:30.620 ******** 2026-03-05 01:10:50.659664 | orchestrator | 2026-03-05 01:10:50.659670 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2026-03-05 01:10:50.659676 | orchestrator | Thursday 05 March 2026 01:09:02 +0000 (0:00:00.078) 0:02:30.699 ******** 2026-03-05 01:10:50.659681 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:10:50.659688 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:10:50.659694 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:10:50.659700 | orchestrator | 2026-03-05 01:10:50.659706 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2026-03-05 01:10:50.659712 | orchestrator | Thursday 05 March 2026 01:09:21 +0000 (0:00:19.078) 0:02:49.777 ******** 2026-03-05 01:10:50.659719 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:10:50.659725 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:10:50.659731 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:10:50.659737 | orchestrator | 2026-03-05 01:10:50.659743 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 
2026-03-05 01:10:50.659750 | orchestrator | Thursday 05 March 2026 01:09:36 +0000 (0:00:14.270) 0:03:04.048 ******** 2026-03-05 01:10:50.659756 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:10:50.659763 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:10:50.659775 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:10:50.659779 | orchestrator | 2026-03-05 01:10:50.659783 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2026-03-05 01:10:50.659787 | orchestrator | Thursday 05 March 2026 01:09:49 +0000 (0:00:12.990) 0:03:17.039 ******** 2026-03-05 01:10:50.659791 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:10:50.659795 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:10:50.659798 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:10:50.659802 | orchestrator | 2026-03-05 01:10:50.659806 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2026-03-05 01:10:50.659811 | orchestrator | Thursday 05 March 2026 01:10:01 +0000 (0:00:11.897) 0:03:28.936 ******** 2026-03-05 01:10:50.659816 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:10:50.659822 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:10:50.659828 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:10:50.659834 | orchestrator | 2026-03-05 01:10:50.659840 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-03-05 01:10:50.659847 | orchestrator | Thursday 05 March 2026 01:10:14 +0000 (0:00:13.288) 0:03:42.225 ******** 2026-03-05 01:10:50.659854 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:10:50.659860 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:10:50.659866 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:10:50.659873 | orchestrator | 2026-03-05 01:10:50.659879 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2026-03-05 
01:10:50.659885 | orchestrator | Thursday 05 March 2026 01:10:24 +0000 (0:00:09.899) 0:03:52.125 ******** 2026-03-05 01:10:50.659891 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:10:50.659898 | orchestrator | 2026-03-05 01:10:50.659904 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-05 01:10:50.659910 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-05 01:10:50.659958 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-05 01:10:50.659975 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-05 01:10:50.659982 | orchestrator | 2026-03-05 01:10:50.659988 | orchestrator | 2026-03-05 01:10:50.659995 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-05 01:10:50.660002 | orchestrator | Thursday 05 March 2026 01:10:32 +0000 (0:00:08.657) 0:04:00.782 ******** 2026-03-05 01:10:50.660009 | orchestrator | =============================================================================== 2026-03-05 01:10:50.660016 | orchestrator | designate : Copying over designate.conf -------------------------------- 32.87s 2026-03-05 01:10:50.660022 | orchestrator | designate : Running Designate bootstrap container ---------------------- 20.84s 2026-03-05 01:10:50.660029 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 19.08s 2026-03-05 01:10:50.660036 | orchestrator | designate : Restart designate-api container ---------------------------- 14.27s 2026-03-05 01:10:50.660042 | orchestrator | designate : Restart designate-mdns container --------------------------- 13.29s 2026-03-05 01:10:50.660049 | orchestrator | designate : Restart designate-central container ------------------------ 12.99s 2026-03-05 01:10:50.660063 | orchestrator | designate 
: Restart designate-producer container ----------------------- 11.90s 2026-03-05 01:10:50.660070 | orchestrator | designate : Restart designate-worker container -------------------------- 9.90s 2026-03-05 01:10:50.660077 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 8.97s 2026-03-05 01:10:50.660084 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 8.66s 2026-03-05 01:10:50.660090 | orchestrator | designate : Copying over config.json files for services ----------------- 8.45s 2026-03-05 01:10:50.660096 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 7.61s 2026-03-05 01:10:50.660110 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 7.33s 2026-03-05 01:10:50.660116 | orchestrator | designate : Check designate containers ---------------------------------- 6.41s 2026-03-05 01:10:50.660122 | orchestrator | designate : Copying over rndc.key --------------------------------------- 5.06s 2026-03-05 01:10:50.660128 | orchestrator | designate : Ensuring config directories exist --------------------------- 4.85s 2026-03-05 01:10:50.660133 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.37s 2026-03-05 01:10:50.660140 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.36s 2026-03-05 01:10:50.660146 | orchestrator | designate : Copying over named.conf ------------------------------------- 3.94s 2026-03-05 01:10:50.660153 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.93s 2026-03-05 01:10:50.660159 | orchestrator | 2026-03-05 01:10:50.660166 | orchestrator | 2026-03-05 01:10:50.660172 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-05 01:10:50.660178 | orchestrator | 2026-03-05 01:10:50.660184 | orchestrator | TASK [Group 
hosts based on Kolla action] *************************************** 2026-03-05 01:10:50.660191 | orchestrator | Thursday 05 March 2026 01:06:21 +0000 (0:00:00.795) 0:00:00.795 ******** 2026-03-05 01:10:50.660197 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:10:50.660204 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:10:50.660210 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:10:50.660216 | orchestrator | 2026-03-05 01:10:50.660223 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-05 01:10:50.660229 | orchestrator | Thursday 05 March 2026 01:06:21 +0000 (0:00:00.611) 0:00:01.407 ******** 2026-03-05 01:10:50.660235 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-03-05 01:10:50.660242 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2026-03-05 01:10:50.660248 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2026-03-05 01:10:50.660254 | orchestrator | 2026-03-05 01:10:50.660261 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2026-03-05 01:10:50.660267 | orchestrator | 2026-03-05 01:10:50.660273 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-05 01:10:50.660279 | orchestrator | Thursday 05 March 2026 01:06:22 +0000 (0:00:00.837) 0:00:02.246 ******** 2026-03-05 01:10:50.660283 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 01:10:50.660288 | orchestrator | 2026-03-05 01:10:50.660291 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2026-03-05 01:10:50.660296 | orchestrator | Thursday 05 March 2026 01:06:23 +0000 (0:00:01.127) 0:00:03.373 ******** 2026-03-05 01:10:50.660303 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2026-03-05 01:10:50.660308 | orchestrator | 2026-03-05 
01:10:50.660314 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2026-03-05 01:10:50.660320 | orchestrator | Thursday 05 March 2026 01:06:27 +0000 (0:00:03.984) 0:00:07.358 ******** 2026-03-05 01:10:50.660326 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2026-03-05 01:10:50.660332 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2026-03-05 01:10:50.660339 | orchestrator | 2026-03-05 01:10:50.660345 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2026-03-05 01:10:50.660352 | orchestrator | Thursday 05 March 2026 01:06:35 +0000 (0:00:07.355) 0:00:14.714 ******** 2026-03-05 01:10:50.660358 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-05 01:10:50.660365 | orchestrator | 2026-03-05 01:10:50.660371 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2026-03-05 01:10:50.660377 | orchestrator | Thursday 05 March 2026 01:06:39 +0000 (0:00:03.730) 0:00:18.444 ******** 2026-03-05 01:10:50.660384 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2026-03-05 01:10:50.660398 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-05 01:10:50.660404 | orchestrator | 2026-03-05 01:10:50.660418 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2026-03-05 01:10:50.660425 | orchestrator | Thursday 05 March 2026 01:06:43 +0000 (0:00:04.360) 0:00:22.805 ******** 2026-03-05 01:10:50.660431 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-05 01:10:50.660438 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2026-03-05 01:10:50.660444 | orchestrator | changed: [testbed-node-0] => (item=creator) 2026-03-05 01:10:50.660452 | orchestrator | changed: [testbed-node-0] 
=> (item=observer) 2026-03-05 01:10:50.660458 | orchestrator | changed: [testbed-node-0] => (item=audit) 2026-03-05 01:10:50.660465 | orchestrator | 2026-03-05 01:10:50.660472 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2026-03-05 01:10:50.660479 | orchestrator | Thursday 05 March 2026 01:07:01 +0000 (0:00:17.644) 0:00:40.450 ******** 2026-03-05 01:10:50.660486 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2026-03-05 01:10:50.660493 | orchestrator | 2026-03-05 01:10:50.660500 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2026-03-05 01:10:50.660507 | orchestrator | Thursday 05 March 2026 01:07:04 +0000 (0:00:03.738) 0:00:44.188 ******** 2026-03-05 01:10:50.660522 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-05 01:10:50.660531 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 
'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-05 01:10:50.660538 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-05 01:10:50.660556 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-05 01:10:50.660564 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-05 01:10:50.660573 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-05 01:10:50.660580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 
'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:10:50.660587 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:10:50.660593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:10:50.660600 | orchestrator | 2026-03-05 01:10:50.660607 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2026-03-05 01:10:50.660618 | orchestrator | Thursday 05 March 2026 01:07:07 +0000 (0:00:02.611) 0:00:46.800 ******** 2026-03-05 01:10:50.660624 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2026-03-05 
01:10:50.660631 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2026-03-05 01:10:50.660637 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2026-03-05 01:10:50.660644 | orchestrator | 2026-03-05 01:10:50.660648 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2026-03-05 01:10:50.660652 | orchestrator | Thursday 05 March 2026 01:07:09 +0000 (0:00:01.746) 0:00:48.546 ******** 2026-03-05 01:10:50.660656 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:10:50.660659 | orchestrator | 2026-03-05 01:10:50.660663 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2026-03-05 01:10:50.660667 | orchestrator | Thursday 05 March 2026 01:07:09 +0000 (0:00:00.286) 0:00:48.832 ******** 2026-03-05 01:10:50.660671 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:10:50.660674 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:10:50.660678 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:10:50.660682 | orchestrator | 2026-03-05 01:10:50.660688 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-05 01:10:50.660693 | orchestrator | Thursday 05 March 2026 01:07:10 +0000 (0:00:00.664) 0:00:49.497 ******** 2026-03-05 01:10:50.660697 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 01:10:50.660701 | orchestrator | 2026-03-05 01:10:50.660704 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2026-03-05 01:10:50.660708 | orchestrator | Thursday 05 March 2026 01:07:10 +0000 (0:00:00.700) 0:00:50.197 ******** 2026-03-05 01:10:50.660715 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-05 01:10:50.660807 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-05 01:10:50.660819 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-05 01:10:50.660837 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-05 01:10:50.660843 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-05 01:10:50.660854 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-05 01:10:50.660861 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:10:50.660873 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-05 
01:10:50.660880 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:10:50.660892 | orchestrator | 2026-03-05 01:10:50.660899 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2026-03-05 01:10:50.660905 | orchestrator | Thursday 05 March 2026 01:07:15 +0000 (0:00:04.465) 0:00:54.662 ******** 2026-03-05 01:10:50.660911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-05 01:10:50.660940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': 
{'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-05 01:10:50.660953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-05 01:10:50.660959 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:10:50.660971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': 
'9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-05 01:10:50.660978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-05 01:10:50.660986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-05 01:10:50.660989 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:10:50.660993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-05 01:10:50.660997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-05 01:10:50.661004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-05 01:10:50.661008 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:10:50.661012 | orchestrator | 
2026-03-05 01:10:50.661016 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-03-05 01:10:50.661019 | orchestrator | Thursday 05 March 2026 01:07:17 +0000 (0:00:02.753) 0:00:57.415 ******** 2026-03-05 01:10:50.661026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-05 01:10:50.661035 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-05 01:10:50.661039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 
'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-05 01:10:50.661043 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:10:50.661047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-05 01:10:50.661054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-05 01:10:50.661058 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-05 01:10:50.661065 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:10:50.661072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-05 01:10:50.661076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-05 01:10:50.661080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-05 01:10:50.661084 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:10:50.661088 | orchestrator | 2026-03-05 01:10:50.661092 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-03-05 01:10:50.661095 | orchestrator | Thursday 05 March 2026 01:07:20 +0000 (0:00:02.668) 0:01:00.083 ******** 2026-03-05 01:10:50.661102 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-05 01:10:50.661109 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-05 01:10:50.661117 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-05 01:10:50.661121 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-05 01:10:50.661125 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-05 01:10:50.661129 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-05 01:10:50.661136 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:10:50.661146 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:10:50.661150 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:10:50.661154 | orchestrator | 2026-03-05 01:10:50.661158 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2026-03-05 01:10:50.661162 | orchestrator | Thursday 05 March 2026 01:07:25 +0000 (0:00:04.902) 0:01:04.986 ******** 2026-03-05 01:10:50.661166 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:10:50.661170 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:10:50.661174 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:10:50.661177 | orchestrator | 2026-03-05 01:10:50.661181 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2026-03-05 01:10:50.661185 | orchestrator | Thursday 05 March 2026 01:07:28 +0000 (0:00:02.947) 0:01:07.933 ******** 2026-03-05 01:10:50.661189 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-05 01:10:50.661193 | orchestrator | 2026-03-05 01:10:50.661196 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2026-03-05 01:10:50.661200 | orchestrator | Thursday 05 March 2026 01:07:31 +0000 (0:00:02.807) 0:01:10.741 ******** 2026-03-05 01:10:50.661204 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:10:50.661208 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:10:50.661211 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:10:50.661215 | orchestrator | 2026-03-05 01:10:50.661219 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2026-03-05 01:10:50.661223 | orchestrator | Thursday 05 March 
2026 01:07:32 +0000 (0:00:01.122) 0:01:11.864 ******** 2026-03-05 01:10:50.661227 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-05 01:10:50.661233 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-05 
01:10:50.661247 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-05 01:10:50.661251 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-05 01:10:50.661255 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-05 01:10:50.661259 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-05 01:10:50.661263 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:10:50.661275 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:10:50.661280 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:10:50.661284 | orchestrator | 2026-03-05 01:10:50.661288 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-03-05 01:10:50.661292 | orchestrator | Thursday 05 March 2026 01:07:46 +0000 (0:00:13.920) 0:01:25.785 ******** 2026-03-05 01:10:50.661298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-05 01:10:50.661302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-05 01:10:50.661306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-05 01:10:50.661310 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:10:50.661317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-05 01:10:50.661325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-05 01:10:50.661332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-05 01:10:50.661336 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:10:50.661340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 
'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-05 01:10:50.661344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-05 01:10:50.661348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-05 01:10:50.661355 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:10:50.661359 | orchestrator | 2026-03-05 01:10:50.661363 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2026-03-05 01:10:50.661367 | orchestrator | Thursday 05 March 2026 01:07:48 +0000 (0:00:01.716) 0:01:27.502 ******** 2026-03-05 01:10:50.661373 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-05 01:10:50.661380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-05 01:10:50.661384 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-05 01:10:50.661388 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-05 01:10:50.661396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-05 01:10:50.661402 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-05 01:10:50.661406 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:10:50.661415 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:10:50.661419 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:10:50.661423 | orchestrator | 2026-03-05 01:10:50.661427 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-05 01:10:50.661433 | orchestrator | Thursday 05 March 2026 01:07:54 +0000 (0:00:06.270) 0:01:33.773 ******** 2026-03-05 01:10:50.661439 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:10:50.661445 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:10:50.661451 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:10:50.661457 | orchestrator | 2026-03-05 01:10:50.661463 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2026-03-05 
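The loop items above carry kolla-style `healthcheck` dicts whose durations (`interval`, `timeout`, `start_period`) are strings of seconds, while the Docker Engine API's `HealthConfig` expects nanoseconds. A minimal sketch of that conversion, purely illustrative (the real handling lives inside kolla-ansible's container modules, not in this job):

```python
# Sketch: map a kolla-ansible style healthcheck dict (seconds as strings, as in
# the loop items above) onto Docker Engine API HealthConfig fields, which take
# durations in nanoseconds. Hypothetical helper, not part of the deployment.
NS_PER_S = 1_000_000_000

def to_docker_healthcheck(hc: dict) -> dict:
    return {
        "Test": hc["test"],  # e.g. ['CMD-SHELL', 'healthcheck_port barbican-worker 5672']
        "Interval": int(hc["interval"]) * NS_PER_S,
        "Timeout": int(hc["timeout"]) * NS_PER_S,
        "StartPeriod": int(hc["start_period"]) * NS_PER_S,
        "Retries": int(hc["retries"]),
    }

hc = {"interval": "30", "retries": "3", "start_period": "5",
      "test": ["CMD-SHELL", "healthcheck_port barbican-worker 5672"],
      "timeout": "30"}
print(to_docker_healthcheck(hc)["Interval"])  # 30000000000
```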
01:10:50.661469 | orchestrator | Thursday 05 March 2026 01:07:55 +0000 (0:00:01.456) 0:01:35.229 ******** 2026-03-05 01:10:50.661475 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:10:50.661487 | orchestrator | 2026-03-05 01:10:50.661494 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2026-03-05 01:10:50.661500 | orchestrator | Thursday 05 March 2026 01:07:58 +0000 (0:00:02.931) 0:01:38.160 ******** 2026-03-05 01:10:50.661507 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:10:50.661513 | orchestrator | 2026-03-05 01:10:50.661519 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2026-03-05 01:10:50.661525 | orchestrator | Thursday 05 March 2026 01:08:01 +0000 (0:00:02.671) 0:01:40.832 ******** 2026-03-05 01:10:50.661531 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:10:50.661537 | orchestrator | 2026-03-05 01:10:50.661544 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-05 01:10:50.661550 | orchestrator | Thursday 05 March 2026 01:08:16 +0000 (0:00:15.286) 0:01:56.118 ******** 2026-03-05 01:10:50.661556 | orchestrator | 2026-03-05 01:10:50.661563 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-05 01:10:50.661570 | orchestrator | Thursday 05 March 2026 01:08:16 +0000 (0:00:00.251) 0:01:56.369 ******** 2026-03-05 01:10:50.661576 | orchestrator | 2026-03-05 01:10:50.661582 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-05 01:10:50.661588 | orchestrator | Thursday 05 March 2026 01:08:17 +0000 (0:00:00.122) 0:01:56.493 ******** 2026-03-05 01:10:50.661595 | orchestrator | 2026-03-05 01:10:50.661600 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2026-03-05 01:10:50.661606 | orchestrator | Thursday 05 March 2026 01:08:17 +0000 
(0:00:00.071) 0:01:56.565 ******** 2026-03-05 01:10:50.661612 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:10:50.661619 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:10:50.661625 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:10:50.661631 | orchestrator | 2026-03-05 01:10:50.661637 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2026-03-05 01:10:50.661643 | orchestrator | Thursday 05 March 2026 01:08:32 +0000 (0:00:15.034) 0:02:11.600 ******** 2026-03-05 01:10:50.661649 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:10:50.661655 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:10:50.661658 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:10:50.661662 | orchestrator | 2026-03-05 01:10:50.661666 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2026-03-05 01:10:50.661673 | orchestrator | Thursday 05 March 2026 01:08:41 +0000 (0:00:09.627) 0:02:21.227 ******** 2026-03-05 01:10:50.661677 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:10:50.661681 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:10:50.661684 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:10:50.661688 | orchestrator | 2026-03-05 01:10:50.661692 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-05 01:10:50.661696 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-05 01:10:50.661701 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-05 01:10:50.661705 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-05 01:10:50.661709 | orchestrator | 2026-03-05 01:10:50.661712 | orchestrator | 2026-03-05 01:10:50.661716 | orchestrator | TASKS RECAP 
******************************************************************** 2026-03-05 01:10:50.661720 | orchestrator | Thursday 05 March 2026 01:08:53 +0000 (0:00:11.678) 0:02:32.906 ******** 2026-03-05 01:10:50.661724 | orchestrator | =============================================================================== 2026-03-05 01:10:50.661728 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 17.64s 2026-03-05 01:10:50.661735 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 15.29s 2026-03-05 01:10:50.661739 | orchestrator | barbican : Restart barbican-api container ------------------------------ 15.03s 2026-03-05 01:10:50.661747 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 13.92s 2026-03-05 01:10:50.661751 | orchestrator | barbican : Restart barbican-worker container --------------------------- 11.68s 2026-03-05 01:10:50.661755 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 9.63s 2026-03-05 01:10:50.661758 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 7.36s 2026-03-05 01:10:50.661762 | orchestrator | barbican : Check barbican containers ------------------------------------ 6.27s 2026-03-05 01:10:50.661766 | orchestrator | barbican : Copying over config.json files for services ------------------ 4.90s 2026-03-05 01:10:50.661769 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 4.47s 2026-03-05 01:10:50.661773 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.36s 2026-03-05 01:10:50.661777 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.99s 2026-03-05 01:10:50.661781 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 3.74s 2026-03-05 01:10:50.661784 | orchestrator | service-ks-register : barbican | 
Creating projects ---------------------- 3.73s 2026-03-05 01:10:50.661788 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.95s 2026-03-05 01:10:50.661792 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.93s 2026-03-05 01:10:50.661795 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 2.81s 2026-03-05 01:10:50.661799 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS certificate --- 2.75s 2026-03-05 01:10:50.661803 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.67s 2026-03-05 01:10:50.661807 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS key ---- 2.67s 2026-03-05 01:10:50.661811 | orchestrator | 2026-03-05 01:10:50 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED 2026-03-05 01:10:50.661815 | orchestrator | 2026-03-05 01:10:50 | INFO  | Task 2007cb41-b21d-48ad-acaa-222e2de02aba is in state STARTED 2026-03-05 01:10:50.661819 | orchestrator | 2026-03-05 01:10:50 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:10:53.766516 | orchestrator | 2026-03-05 01:10:53 | INFO  | Task f6b5b90d-0587-4b1c-84ba-f3b5e190021c is in state STARTED 2026-03-05 01:10:53.766608 | orchestrator | 2026-03-05 01:10:53 | INFO  | Task 6c897f70-350c-40e0-861e-66cc3d459075 is in state STARTED 2026-03-05 01:10:53.766616 | orchestrator | 2026-03-05 01:10:53 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED 2026-03-05 01:10:53.766623 | orchestrator | 2026-03-05 01:10:53 | INFO  | Task 2007cb41-b21d-48ad-acaa-222e2de02aba is in state STARTED 2026-03-05 01:10:53.766630 | orchestrator | 2026-03-05 01:10:53 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:10:56.740307 | orchestrator | 2026-03-05 01:10:56 | INFO  | Task f6b5b90d-0587-4b1c-84ba-f3b5e190021c is in state STARTED 2026-03-05 
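The TASKS RECAP above lists the slowest tasks as `name ---- 15.29s` lines. A small sketch for pulling (task, seconds) pairs out of such lines, e.g. to sum or sort them when profiling a run; the helper name and regex are my own, not part of Ansible:

```python
import re

# Sketch: parse "TASKS RECAP" style lines such as
#   "barbican : Running barbican bootstrap container ------------ 15.29s"
# into (name, seconds) tuples. Hypothetical helper for log analysis.
RECAP_RE = re.compile(r"^(?P<name>.+?)\s+-{2,}\s+(?P<secs>\d+\.\d+)s$")

def parse_recap(lines):
    out = []
    for line in lines:
        m = RECAP_RE.match(line.strip())
        if m:
            out.append((m.group("name"), float(m.group("secs"))))
    return out

sample = [
    "service-ks-register : barbican | Creating roles ------------------------ 17.64s",
    "barbican : Running barbican bootstrap container ------------------------ 15.29s",
]
print(parse_recap(sample))
```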
01:10:56.740739 | orchestrator | 2026-03-05 01:10:56 | INFO  | Task 6c897f70-350c-40e0-861e-66cc3d459075 is in state STARTED
2026-03-05 01:10:56.742320 | orchestrator | 2026-03-05 01:10:56 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED
2026-03-05 01:10:56.742320 | orchestrator | 2026-03-05 01:10:56 | INFO  | Task 2007cb41-b21d-48ad-acaa-222e2de02aba is in state STARTED
2026-03-05 01:10:56.742343 | orchestrator | 2026-03-05 01:10:56 | INFO  | Wait 1 second(s) until the next check
[identical polling output for the same four tasks, repeated every ~3 s from 01:10:59 through 01:11:51, trimmed]
2026-03-05 01:11:54.686105 | orchestrator | 2026-03-05 01:11:54 | INFO  | Task f6b5b90d-0587-4b1c-84ba-f3b5e190021c is in state SUCCESS
2026-03-05 01:11:54.688162 | orchestrator | 2026-03-05 01:11:54 | INFO  | Task 759ccc4e-add6-4f79-94bc-0b6860a1298b is in state STARTED
2026-03-05 01:11:54.690549 | orchestrator | 2026-03-05 01:11:54 | INFO  | Task 6c897f70-350c-40e0-861e-66cc3d459075 is in state STARTED
2026-03-05 01:11:54.692645 | orchestrator | 2026-03-05 01:11:54 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED
2026-03-05 01:11:54.694813 | orchestrator | 2026-03-05 01:11:54 | INFO  | Task 2007cb41-b21d-48ad-acaa-222e2de02aba is in state STARTED
2026-03-05 01:11:54.694851 | orchestrator | 2026-03-05 01:11:54 | INFO  | Wait 1 second(s) until the next check
2026-03-05 01:11:57.739466 | orchestrator | 2026-03-05 01:11:57 | INFO  | Task bad745dc-6652-483b-b4c6-4cf5575a6b78 is in state STARTED
2026-03-05 01:11:57.740511 | orchestrator | 2026-03-05 01:11:57 | INFO  | Task 759ccc4e-add6-4f79-94bc-0b6860a1298b is in state STARTED
2026-03-05 01:11:57.742533 | orchestrator | 2026-03-05 01:11:57 | INFO  | Task 6c897f70-350c-40e0-861e-66cc3d459075 is in state SUCCESS
2026-03-05 01:11:57.742699 | orchestrator |
2026-03-05 01:11:57.742715 | orchestrator |
2026-03-05 01:11:57.742720 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2026-03-05 01:11:57.742725 | orchestrator |
2026-03-05 01:11:57.742729 | orchestrator | TASK [Ensure the destination directory exists]
********************************* 2026-03-05 01:11:57.742734 | orchestrator | Thursday 05 March 2026 01:09:02 +0000 (0:00:00.230) 0:00:00.230 ******** 2026-03-05 01:11:57.742738 | orchestrator | changed: [localhost] 2026-03-05 01:11:57.742743 | orchestrator | 2026-03-05 01:11:57.742747 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2026-03-05 01:11:57.742751 | orchestrator | Thursday 05 March 2026 01:09:03 +0000 (0:00:01.345) 0:00:01.576 ******** 2026-03-05 01:11:57.742755 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (3 retries left). 2026-03-05 01:11:57.742759 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (2 retries left). 2026-03-05 01:11:57.742763 | orchestrator | changed: [localhost] 2026-03-05 01:11:57.742767 | orchestrator | 2026-03-05 01:11:57.742770 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2026-03-05 01:11:57.742802 | orchestrator | Thursday 05 March 2026 01:11:00 +0000 (0:01:56.380) 0:01:57.957 ******** 2026-03-05 01:11:57.742807 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent kernel (3 retries left). 2026-03-05 01:11:57.742839 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent kernel (2 retries left). 
2026-03-05 01:11:57.742844 | orchestrator | changed: [localhost] 2026-03-05 01:11:57.742848 | orchestrator | 2026-03-05 01:11:57.742852 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-05 01:11:57.742855 | orchestrator | 2026-03-05 01:11:57.742859 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-05 01:11:57.742863 | orchestrator | Thursday 05 March 2026 01:11:51 +0000 (0:00:50.992) 0:02:48.949 ******** 2026-03-05 01:11:57.742953 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:11:57.742958 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:11:57.742962 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:11:57.742966 | orchestrator | 2026-03-05 01:11:57.742969 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-05 01:11:57.742973 | orchestrator | Thursday 05 March 2026 01:11:51 +0000 (0:00:00.351) 0:02:49.301 ******** 2026-03-05 01:11:57.742995 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2026-03-05 01:11:57.743000 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 2026-03-05 01:11:57.743004 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 2026-03-05 01:11:57.743008 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2026-03-05 01:11:57.743011 | orchestrator | 2026-03-05 01:11:57.743015 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2026-03-05 01:11:57.743019 | orchestrator | skipping: no hosts matched 2026-03-05 01:11:57.743023 | orchestrator | 2026-03-05 01:11:57.743027 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-05 01:11:57.743031 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 01:11:57.743038 | orchestrator | 
testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 01:11:57.743045 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 01:11:57.743048 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 01:11:57.743052 | orchestrator | 2026-03-05 01:11:57.743056 | orchestrator | 2026-03-05 01:11:57.743060 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-05 01:11:57.743063 | orchestrator | Thursday 05 March 2026 01:11:52 +0000 (0:00:00.694) 0:02:49.996 ******** 2026-03-05 01:11:57.743067 | orchestrator | =============================================================================== 2026-03-05 01:11:57.743071 | orchestrator | Download ironic-agent initramfs --------------------------------------- 116.38s 2026-03-05 01:11:57.743075 | orchestrator | Download ironic-agent kernel ------------------------------------------- 50.99s 2026-03-05 01:11:57.743078 | orchestrator | Ensure the destination directory exists --------------------------------- 1.35s 2026-03-05 01:11:57.743082 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.69s 2026-03-05 01:11:57.743086 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.35s 2026-03-05 01:11:57.743090 | orchestrator | 2026-03-05 01:11:57.743929 | orchestrator | 2026-03-05 01:11:57.743988 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-05 01:11:57.743997 | orchestrator | 2026-03-05 01:11:57.744004 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-05 01:11:57.744010 | orchestrator | Thursday 05 March 2026 01:10:40 +0000 (0:00:00.633) 0:00:00.633 ******** 2026-03-05 01:11:57.744016 | orchestrator | ok: [testbed-node-0] 2026-03-05 
01:11:57.744023 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:11:57.744029 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:11:57.744034 | orchestrator | 2026-03-05 01:11:57.744040 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-05 01:11:57.744046 | orchestrator | Thursday 05 March 2026 01:10:40 +0000 (0:00:00.320) 0:00:00.954 ******** 2026-03-05 01:11:57.744052 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2026-03-05 01:11:57.744058 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2026-03-05 01:11:57.744064 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2026-03-05 01:11:57.744069 | orchestrator | 2026-03-05 01:11:57.744075 | orchestrator | PLAY [Apply role placement] **************************************************** 2026-03-05 01:11:57.744080 | orchestrator | 2026-03-05 01:11:57.744086 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-03-05 01:11:57.744092 | orchestrator | Thursday 05 March 2026 01:10:41 +0000 (0:00:00.392) 0:00:01.347 ******** 2026-03-05 01:11:57.744097 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 01:11:57.744103 | orchestrator | 2026-03-05 01:11:57.744109 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2026-03-05 01:11:57.744128 | orchestrator | Thursday 05 March 2026 01:10:41 +0000 (0:00:00.565) 0:00:01.913 ******** 2026-03-05 01:11:57.744135 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2026-03-05 01:11:57.744141 | orchestrator | 2026-03-05 01:11:57.744146 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2026-03-05 01:11:57.744152 | orchestrator | Thursday 05 March 2026 01:10:45 +0000 (0:00:04.118) 0:00:06.031 ******** 2026-03-05 01:11:57.744158 
| orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2026-03-05 01:11:57.744164 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2026-03-05 01:11:57.744170 | orchestrator | 2026-03-05 01:11:57.744175 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2026-03-05 01:11:57.744191 | orchestrator | Thursday 05 March 2026 01:10:52 +0000 (0:00:07.098) 0:00:13.129 ******** 2026-03-05 01:11:57.744211 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-05 01:11:57.744224 | orchestrator | 2026-03-05 01:11:57.744229 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2026-03-05 01:11:57.744235 | orchestrator | Thursday 05 March 2026 01:10:56 +0000 (0:00:03.305) 0:00:16.434 ******** 2026-03-05 01:11:57.744240 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2026-03-05 01:11:57.744246 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-05 01:11:57.744252 | orchestrator | 2026-03-05 01:11:57.744258 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2026-03-05 01:11:57.744264 | orchestrator | Thursday 05 March 2026 01:11:00 +0000 (0:00:04.571) 0:00:21.006 ******** 2026-03-05 01:11:57.744270 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-05 01:11:57.744275 | orchestrator | 2026-03-05 01:11:57.744281 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2026-03-05 01:11:57.744287 | orchestrator | Thursday 05 March 2026 01:11:04 +0000 (0:00:03.951) 0:00:24.958 ******** 2026-03-05 01:11:57.744292 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2026-03-05 01:11:57.744298 | orchestrator | 2026-03-05 01:11:57.744304 | orchestrator | TASK [placement : include_tasks] 
*********************************************** 2026-03-05 01:11:57.744348 | orchestrator | Thursday 05 March 2026 01:11:08 +0000 (0:00:04.154) 0:00:29.112 ******** 2026-03-05 01:11:57.744355 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:11:57.744360 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:11:57.744366 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:11:57.744375 | orchestrator | 2026-03-05 01:11:57.744385 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-03-05 01:11:57.744394 | orchestrator | Thursday 05 March 2026 01:11:09 +0000 (0:00:00.378) 0:00:29.491 ******** 2026-03-05 01:11:57.744409 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-05 01:11:57.744437 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-05 01:11:57.744458 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-05 01:11:57.744469 | orchestrator | 2026-03-05 01:11:57.744478 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-03-05 01:11:57.744501 | orchestrator | Thursday 05 March 2026 01:11:10 +0000 (0:00:00.949) 0:00:30.440 ******** 2026-03-05 01:11:57.744512 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:11:57.744520 | orchestrator | 2026-03-05 01:11:57.744530 | orchestrator | TASK [placement : Set placement policy file] *********************************** 
2026-03-05 01:11:57.744540 | orchestrator | Thursday 05 March 2026 01:11:10 +0000 (0:00:00.143) 0:00:30.584 ******** 2026-03-05 01:11:57.744549 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:11:57.744558 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:11:57.744567 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:11:57.744575 | orchestrator | 2026-03-05 01:11:57.744584 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-03-05 01:11:57.744594 | orchestrator | Thursday 05 March 2026 01:11:10 +0000 (0:00:00.639) 0:00:31.223 ******** 2026-03-05 01:11:57.744605 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 01:11:57.744615 | orchestrator | 2026-03-05 01:11:57.744624 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-03-05 01:11:57.744634 | orchestrator | Thursday 05 March 2026 01:11:11 +0000 (0:00:00.649) 0:00:31.873 ******** 2026-03-05 01:11:57.744646 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-05 
01:11:57.744668 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-05 01:11:57.744690 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-05 01:11:57.744701 | orchestrator | 2026-03-05 01:11:57.744712 | orchestrator | TASK [service-cert-copy : placement | Copying over backend 
internal TLS certificate] *** 2026-03-05 01:11:57.744723 | orchestrator | Thursday 05 March 2026 01:11:13 +0000 (0:00:01.665) 0:00:33.539 ******** 2026-03-05 01:11:57.744740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-05 01:11:57.744750 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:11:57.744762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-05 01:11:57.744772 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:11:57.744789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-05 01:11:57.744808 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:11:57.744820 | orchestrator | 2026-03-05 01:11:57.744830 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-03-05 01:11:57.744840 | orchestrator | Thursday 05 March 2026 01:11:14 +0000 (0:00:00.922) 0:00:34.461 ******** 2026-03-05 01:11:57.744850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-05 01:11:57.744859 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:11:57.744907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-05 01:11:57.744919 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:11:57.744930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-05 01:11:57.744948 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:11:57.744958 | orchestrator | 2026-03-05 01:11:57.744966 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-03-05 01:11:57.744974 | orchestrator | Thursday 05 March 2026 01:11:15 +0000 (0:00:00.980) 0:00:35.442 ******** 2026-03-05 01:11:57.744991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-05 01:11:57.745002 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-05 01:11:57.745026 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-05 01:11:57.745038 | orchestrator | 2026-03-05 01:11:57.745047 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-03-05 01:11:57.745057 | orchestrator | Thursday 05 March 2026 01:11:16 +0000 (0:00:01.455) 0:00:36.897 ******** 2026-03-05 01:11:57.745067 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 
'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-05 01:11:57.745081 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-05 01:11:57.745094 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 
'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-05 01:11:57.745100 | orchestrator | 2026-03-05 01:11:57.745105 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2026-03-05 01:11:57.745111 | orchestrator | Thursday 05 March 2026 01:11:19 +0000 (0:00:03.379) 0:00:40.277 ******** 2026-03-05 01:11:57.745117 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-05 01:11:57.745123 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-05 01:11:57.745129 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-05 01:11:57.745135 | orchestrator | 2026-03-05 01:11:57.745141 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2026-03-05 01:11:57.745146 | orchestrator | Thursday 05 March 2026 01:11:22 +0000 (0:00:02.767) 0:00:43.044 ******** 2026-03-05 01:11:57.745152 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:11:57.745158 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:11:57.745164 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:11:57.745170 | orchestrator | 2026-03-05 01:11:57.745175 | orchestrator | TASK 
[placement : Copying over existing policy file] *************************** 2026-03-05 01:11:57.745181 | orchestrator | Thursday 05 March 2026 01:11:25 +0000 (0:00:02.395) 0:00:45.439 ******** 2026-03-05 01:11:57.745190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-05 01:11:57.745201 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:11:57.745207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-05 01:11:57.745213 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:11:57.745224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-05 01:11:57.745230 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:11:57.745236 | orchestrator | 2026-03-05 01:11:57.745241 | orchestrator | TASK [placement : Check placement containers] ********************************** 2026-03-05 01:11:57.745247 | orchestrator | Thursday 05 March 2026 01:11:25 +0000 (0:00:00.742) 0:00:46.182 ******** 2026-03-05 01:11:57.745253 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-05 01:11:57.745262 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-05 01:11:57.745273 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': 
{'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-05 01:11:57.745278 | orchestrator | 2026-03-05 01:11:57.745284 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-03-05 01:11:57.745290 | orchestrator | Thursday 05 March 2026 01:11:27 +0000 (0:00:01.657) 0:00:47.840 ******** 2026-03-05 01:11:57.745295 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:11:57.745300 | orchestrator | 2026-03-05 01:11:57.745306 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2026-03-05 01:11:57.745312 | orchestrator | Thursday 05 March 2026 01:11:31 +0000 (0:00:03.714) 0:00:51.555 ******** 2026-03-05 01:11:57.745317 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:11:57.745322 | orchestrator | 2026-03-05 01:11:57.745328 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-03-05 01:11:57.745334 | orchestrator | Thursday 05 March 2026 01:11:33 +0000 (0:00:02.697) 0:00:54.252 ******** 2026-03-05 01:11:57.745339 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:11:57.745345 | orchestrator | 2026-03-05 01:11:57.745351 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-05 01:11:57.745357 | orchestrator | Thursday 05 March 2026 01:11:50 +0000 (0:00:16.077) 0:01:10.330 ******** 2026-03-05 01:11:57.745362 | orchestrator | 2026-03-05 01:11:57.745368 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-05 01:11:57.745373 | orchestrator | Thursday 05 March 2026 01:11:50 +0000 (0:00:00.102) 0:01:10.432 ******** 2026-03-05 01:11:57.745379 | orchestrator | 
2026-03-05 01:11:57.745389 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-05 01:11:57.745396 | orchestrator | Thursday 05 March 2026 01:11:50 +0000 (0:00:00.074) 0:01:10.506 ******** 2026-03-05 01:11:57.745401 | orchestrator | 2026-03-05 01:11:57.745407 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-03-05 01:11:57.745413 | orchestrator | Thursday 05 March 2026 01:11:50 +0000 (0:00:00.084) 0:01:10.591 ******** 2026-03-05 01:11:57.745418 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:11:57.745424 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:11:57.745429 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:11:57.745435 | orchestrator | 2026-03-05 01:11:57.745440 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-05 01:11:57.745447 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-05 01:11:57.745454 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-05 01:11:57.745460 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-05 01:11:57.745472 | orchestrator | 2026-03-05 01:11:57.745478 | orchestrator | 2026-03-05 01:11:57.745483 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-05 01:11:57.745489 | orchestrator | Thursday 05 March 2026 01:11:55 +0000 (0:00:05.609) 0:01:16.200 ******** 2026-03-05 01:11:57.745495 | orchestrator | =============================================================================== 2026-03-05 01:11:57.745501 | orchestrator | placement : Running placement bootstrap container ---------------------- 16.08s 2026-03-05 01:11:57.745506 | orchestrator | service-ks-register : placement | Creating endpoints 
-------------------- 7.10s 2026-03-05 01:11:57.745512 | orchestrator | placement : Restart placement-api container ----------------------------- 5.61s 2026-03-05 01:11:57.745517 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.57s 2026-03-05 01:11:57.745523 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.15s 2026-03-05 01:11:57.745528 | orchestrator | service-ks-register : placement | Creating services --------------------- 4.12s 2026-03-05 01:11:57.745533 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.95s 2026-03-05 01:11:57.745542 | orchestrator | placement : Creating placement databases -------------------------------- 3.71s 2026-03-05 01:11:57.745548 | orchestrator | placement : Copying over placement.conf --------------------------------- 3.38s 2026-03-05 01:11:57.745554 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.31s 2026-03-05 01:11:57.745559 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 2.77s 2026-03-05 01:11:57.745564 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.70s 2026-03-05 01:11:57.745570 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 2.40s 2026-03-05 01:11:57.745575 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.67s 2026-03-05 01:11:57.745581 | orchestrator | placement : Check placement containers ---------------------------------- 1.66s 2026-03-05 01:11:57.745586 | orchestrator | placement : Copying over config.json files for services ----------------- 1.46s 2026-03-05 01:11:57.745592 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.98s 2026-03-05 01:11:57.745597 | orchestrator | placement : Ensuring config directories exist 
--------------------------- 0.95s 2026-03-05 01:11:57.745603 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 0.92s 2026-03-05 01:11:57.745612 | orchestrator | placement : Copying over existing policy file --------------------------- 0.74s 2026-03-05 01:11:57.745751 | orchestrator | 2026-03-05 01:11:57 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED 2026-03-05 01:11:57.746190 | orchestrator | 2026-03-05 01:11:57 | INFO  | Task 2007cb41-b21d-48ad-acaa-222e2de02aba is in state STARTED 2026-03-05 01:11:57.746271 | orchestrator | 2026-03-05 01:11:57 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:12:00.783026 | orchestrator | 2026-03-05 01:12:00 | INFO  | Task bad745dc-6652-483b-b4c6-4cf5575a6b78 is in state STARTED 2026-03-05 01:12:00.784362 | orchestrator | 2026-03-05 01:12:00 | INFO  | Task 759ccc4e-add6-4f79-94bc-0b6860a1298b is in state STARTED 2026-03-05 01:12:00.785269 | orchestrator | 2026-03-05 01:12:00 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED 2026-03-05 01:12:00.786423 | orchestrator | 2026-03-05 01:12:00 | INFO  | Task 2007cb41-b21d-48ad-acaa-222e2de02aba is in state STARTED 2026-03-05 01:12:00.786485 | orchestrator | 2026-03-05 01:12:00 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:12:03.824845 | orchestrator | 2026-03-05 01:12:03 | INFO  | Task bad745dc-6652-483b-b4c6-4cf5575a6b78 is in state STARTED 2026-03-05 01:12:03.825638 | orchestrator | 2026-03-05 01:12:03 | INFO  | Task 759ccc4e-add6-4f79-94bc-0b6860a1298b is in state STARTED 2026-03-05 01:12:03.826998 | orchestrator | 2026-03-05 01:12:03 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED 2026-03-05 01:12:03.829830 | orchestrator | 2026-03-05 01:12:03 | INFO  | Task 2007cb41-b21d-48ad-acaa-222e2de02aba is in state STARTED 2026-03-05 01:12:03.829921 | orchestrator | 2026-03-05 01:12:03 | INFO  | Wait 1 second(s) until the next check 2026-03-05 
01:12:25.192213 | orchestrator | 2026-03-05 01:12:25 | INFO  |
Task bad745dc-6652-483b-b4c6-4cf5575a6b78 is in state STARTED 2026-03-05 01:12:25.193347 | orchestrator | 2026-03-05 01:12:25 | INFO  | Task 759ccc4e-add6-4f79-94bc-0b6860a1298b is in state STARTED 2026-03-05 01:12:25.195368 | orchestrator | 2026-03-05 01:12:25 | INFO  | Task 67e57e23-14ae-4369-bbb7-30e6912525b4 is in state STARTED 2026-03-05 01:12:25.196329 | orchestrator | 2026-03-05 01:12:25 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED 2026-03-05 01:12:25.199567 | orchestrator | 2026-03-05 01:12:25 | INFO  | Task 2007cb41-b21d-48ad-acaa-222e2de02aba is in state SUCCESS 2026-03-05 01:12:25.201133 | orchestrator | 2026-03-05 01:12:25.201180 | orchestrator | 2026-03-05 01:12:25.201189 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-05 01:12:25.201197 | orchestrator | 2026-03-05 01:12:25.201203 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-05 01:12:25.201210 | orchestrator | Thursday 05 March 2026 01:06:15 +0000 (0:00:00.348) 0:00:00.348 ******** 2026-03-05 01:12:25.201217 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:12:25.201225 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:12:25.201232 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:12:25.201238 | orchestrator | ok: [testbed-node-3] 2026-03-05 01:12:25.201244 | orchestrator | ok: [testbed-node-4] 2026-03-05 01:12:25.201250 | orchestrator | ok: [testbed-node-5] 2026-03-05 01:12:25.201257 | orchestrator | 2026-03-05 01:12:25.201263 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-05 01:12:25.201270 | orchestrator | Thursday 05 March 2026 01:06:16 +0000 (0:00:00.808) 0:00:01.156 ******** 2026-03-05 01:12:25.201276 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-03-05 01:12:25.201284 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-03-05 01:12:25.201291 | 
orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-03-05 01:12:25.201297 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-03-05 01:12:25.201304 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-03-05 01:12:25.201312 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-03-05 01:12:25.201319 | orchestrator | 2026-03-05 01:12:25.201325 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-03-05 01:12:25.201332 | orchestrator | 2026-03-05 01:12:25.201339 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-05 01:12:25.201346 | orchestrator | Thursday 05 March 2026 01:06:16 +0000 (0:00:00.739) 0:00:01.896 ******** 2026-03-05 01:12:25.201355 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-05 01:12:25.201363 | orchestrator | 2026-03-05 01:12:25.201371 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2026-03-05 01:12:25.201378 | orchestrator | Thursday 05 March 2026 01:06:18 +0000 (0:00:01.480) 0:00:03.376 ******** 2026-03-05 01:12:25.201385 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:12:25.201392 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:12:25.201399 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:12:25.201424 | orchestrator | ok: [testbed-node-3] 2026-03-05 01:12:25.201431 | orchestrator | ok: [testbed-node-4] 2026-03-05 01:12:25.201438 | orchestrator | ok: [testbed-node-5] 2026-03-05 01:12:25.201465 | orchestrator | 2026-03-05 01:12:25.201472 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-03-05 01:12:25.201479 | orchestrator | Thursday 05 March 2026 01:06:20 +0000 (0:00:01.908) 0:00:05.285 ******** 2026-03-05 01:12:25.201486 | 
orchestrator | ok: [testbed-node-1] 2026-03-05 01:12:25.201492 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:12:25.201500 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:12:25.201507 | orchestrator | ok: [testbed-node-3] 2026-03-05 01:12:25.201514 | orchestrator | ok: [testbed-node-4] 2026-03-05 01:12:25.201521 | orchestrator | ok: [testbed-node-5] 2026-03-05 01:12:25.201528 | orchestrator | 2026-03-05 01:12:25.201535 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-03-05 01:12:25.201542 | orchestrator | Thursday 05 March 2026 01:06:22 +0000 (0:00:02.414) 0:00:07.699 ******** 2026-03-05 01:12:25.201549 | orchestrator | ok: [testbed-node-0] => { 2026-03-05 01:12:25.201557 | orchestrator |  "changed": false, 2026-03-05 01:12:25.201565 | orchestrator |  "msg": "All assertions passed" 2026-03-05 01:12:25.201572 | orchestrator | } 2026-03-05 01:12:25.201580 | orchestrator | ok: [testbed-node-1] => { 2026-03-05 01:12:25.201587 | orchestrator |  "changed": false, 2026-03-05 01:12:25.201594 | orchestrator |  "msg": "All assertions passed" 2026-03-05 01:12:25.201601 | orchestrator | } 2026-03-05 01:12:25.201608 | orchestrator | ok: [testbed-node-2] => { 2026-03-05 01:12:25.201672 | orchestrator |  "changed": false, 2026-03-05 01:12:25.201682 | orchestrator |  "msg": "All assertions passed" 2026-03-05 01:12:25.201689 | orchestrator | } 2026-03-05 01:12:25.201696 | orchestrator | ok: [testbed-node-3] => { 2026-03-05 01:12:25.201703 | orchestrator |  "changed": false, 2026-03-05 01:12:25.201711 | orchestrator |  "msg": "All assertions passed" 2026-03-05 01:12:25.201719 | orchestrator | } 2026-03-05 01:12:25.201726 | orchestrator | ok: [testbed-node-4] => { 2026-03-05 01:12:25.201733 | orchestrator |  "changed": false, 2026-03-05 01:12:25.201740 | orchestrator |  "msg": "All assertions passed" 2026-03-05 01:12:25.201747 | orchestrator | } 2026-03-05 01:12:25.201754 | orchestrator | ok: [testbed-node-5] => { 
2026-03-05 01:12:25.201762 | orchestrator |  "changed": false, 2026-03-05 01:12:25.201770 | orchestrator |  "msg": "All assertions passed" 2026-03-05 01:12:25.201779 | orchestrator | } 2026-03-05 01:12:25.201787 | orchestrator | 2026-03-05 01:12:25.201794 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2026-03-05 01:12:25.201801 | orchestrator | Thursday 05 March 2026 01:06:24 +0000 (0:00:01.677) 0:00:09.377 ******** 2026-03-05 01:12:25.201808 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:12:25.201815 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:12:25.201822 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:12:25.201830 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:12:25.201837 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:12:25.201910 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:12:25.201918 | orchestrator | 2026-03-05 01:12:25.201927 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2026-03-05 01:12:25.201934 | orchestrator | Thursday 05 March 2026 01:06:24 +0000 (0:00:00.764) 0:00:10.141 ******** 2026-03-05 01:12:25.201940 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2026-03-05 01:12:25.201947 | orchestrator | 2026-03-05 01:12:25.201953 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2026-03-05 01:12:25.201960 | orchestrator | Thursday 05 March 2026 01:06:28 +0000 (0:00:03.905) 0:00:14.047 ******** 2026-03-05 01:12:25.201968 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2026-03-05 01:12:25.201975 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2026-03-05 01:12:25.201981 | orchestrator | 2026-03-05 01:12:25.202003 | orchestrator | TASK [service-ks-register : neutron | Creating projects] 
*********************** 2026-03-05 01:12:25.202011 | orchestrator | Thursday 05 March 2026 01:06:36 +0000 (0:00:07.342) 0:00:21.390 ******** 2026-03-05 01:12:25.202154 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-05 01:12:25.202163 | orchestrator | 2026-03-05 01:12:25.202169 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2026-03-05 01:12:25.202176 | orchestrator | Thursday 05 March 2026 01:06:39 +0000 (0:00:03.737) 0:00:25.127 ******** 2026-03-05 01:12:25.202183 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2026-03-05 01:12:25.202189 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-05 01:12:25.202197 | orchestrator | 2026-03-05 01:12:25.202204 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2026-03-05 01:12:25.202211 | orchestrator | Thursday 05 March 2026 01:06:44 +0000 (0:00:04.446) 0:00:29.574 ******** 2026-03-05 01:12:25.202216 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-05 01:12:25.202223 | orchestrator | 2026-03-05 01:12:25.202230 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2026-03-05 01:12:25.202236 | orchestrator | Thursday 05 March 2026 01:06:48 +0000 (0:00:03.826) 0:00:33.401 ******** 2026-03-05 01:12:25.202244 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2026-03-05 01:12:25.202251 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2026-03-05 01:12:25.202257 | orchestrator | 2026-03-05 01:12:25.202264 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-05 01:12:25.202271 | orchestrator | Thursday 05 March 2026 01:06:56 +0000 (0:00:08.359) 0:00:41.760 ******** 2026-03-05 01:12:25.202278 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:12:25.202285 | orchestrator | skipping: 
[testbed-node-1] 2026-03-05 01:12:25.202292 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:12:25.202299 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:12:25.202308 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:12:25.202315 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:12:25.202322 | orchestrator | 2026-03-05 01:12:25.202329 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2026-03-05 01:12:25.202336 | orchestrator | Thursday 05 March 2026 01:06:57 +0000 (0:00:00.860) 0:00:42.621 ******** 2026-03-05 01:12:25.202342 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:12:25.202386 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:12:25.202394 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:12:25.202454 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:12:25.202463 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:12:25.202469 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:12:25.202519 | orchestrator | 2026-03-05 01:12:25.202527 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2026-03-05 01:12:25.202535 | orchestrator | Thursday 05 March 2026 01:06:59 +0000 (0:00:02.437) 0:00:45.058 ******** 2026-03-05 01:12:25.202542 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:12:25.202550 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:12:25.202557 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:12:25.202564 | orchestrator | ok: [testbed-node-3] 2026-03-05 01:12:25.202571 | orchestrator | ok: [testbed-node-4] 2026-03-05 01:12:25.202578 | orchestrator | ok: [testbed-node-5] 2026-03-05 01:12:25.202584 | orchestrator | 2026-03-05 01:12:25.202591 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-03-05 01:12:25.202597 | orchestrator | Thursday 05 March 2026 01:07:01 +0000 (0:00:01.176) 0:00:46.235 ******** 2026-03-05 
01:12:25.202603 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:12:25.202609 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:12:25.202616 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:12:25.202623 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:12:25.202649 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:12:25.202656 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:12:25.202662 | orchestrator | 2026-03-05 01:12:25.202667 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2026-03-05 01:12:25.202673 | orchestrator | Thursday 05 March 2026 01:07:04 +0000 (0:00:03.328) 0:00:49.564 ******** 2026-03-05 01:12:25.202693 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-05 01:12:25.202717 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-05 01:12:25.202724 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-05 01:12:25.202736 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-05 01:12:25.202745 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-05 01:12:25.202756 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-05 01:12:25.202763 | orchestrator | 2026-03-05 01:12:25.202770 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] 
***************************** 2026-03-05 01:12:25.202777 | orchestrator | Thursday 05 March 2026 01:07:09 +0000 (0:00:05.088) 0:00:54.652 ******** 2026-03-05 01:12:25.202784 | orchestrator | [WARNING]: Skipped 2026-03-05 01:12:25.202792 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-03-05 01:12:25.202799 | orchestrator | due to this access issue: 2026-03-05 01:12:25.202806 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-03-05 01:12:25.202812 | orchestrator | a directory 2026-03-05 01:12:25.202818 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-05 01:12:25.202824 | orchestrator | 2026-03-05 01:12:25.202831 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-05 01:12:25.202865 | orchestrator | Thursday 05 March 2026 01:07:10 +0000 (0:00:01.137) 0:00:55.790 ******** 2026-03-05 01:12:25.202874 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-05 01:12:25.202882 | orchestrator | 2026-03-05 01:12:25.202889 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-03-05 01:12:25.202896 | orchestrator | Thursday 05 March 2026 01:07:12 +0000 (0:00:01.875) 0:00:57.666 ******** 2026-03-05 01:12:25.202903 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-05 01:12:25.202915 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-05 01:12:25.202927 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-05 01:12:25.202934 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-05 01:12:25.202947 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-05 01:12:25.202954 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-05 01:12:25.202960 | orchestrator | 2026-03-05 01:12:25.202966 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2026-03-05 01:12:25.202973 | orchestrator | Thursday 05 March 2026 01:07:18 +0000 (0:00:05.978) 0:01:03.644 ******** 2026-03-05 01:12:25.202985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-05 01:12:25.202998 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:12:25.203005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': 
{'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-05 01:12:25.203012 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:12:25.203019 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-05 01:12:25.203026 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:12:25.203039 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 
'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-05 01:12:25.203046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-05 01:12:25.203059 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:12:25.203066 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:12:25.203076 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-05 01:12:25.203085 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:12:25.203091 | orchestrator | 2026-03-05 01:12:25.203099 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-03-05 01:12:25.203105 | orchestrator | Thursday 05 March 2026 01:07:23 +0000 (0:00:05.223) 0:01:08.868 ******** 2026-03-05 01:12:25.203112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-05 01:12:25.203119 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:12:25.203130 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': 
True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-05 01:12:25.203138 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:12:25.203145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-05 01:12:25.203152 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:12:25.203163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-05 01:12:25.203180 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:12:25.203187 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-05 01:12:25.203194 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:12:25.203201 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-05 01:12:25.203209 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:12:25.203215 | orchestrator | 2026-03-05 01:12:25.203222 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-03-05 01:12:25.203229 | orchestrator | Thursday 05 March 2026 01:07:28 +0000 (0:00:04.532) 0:01:13.400 ******** 2026-03-05 01:12:25.203236 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:12:25.203243 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:12:25.203250 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:12:25.203257 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:12:25.203263 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:12:25.203270 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:12:25.203277 | orchestrator | 2026-03-05 01:12:25.203284 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-03-05 01:12:25.203295 | orchestrator | Thursday 05 March 2026 01:07:32 +0000 (0:00:04.135) 0:01:17.535 ******** 2026-03-05 01:12:25.203301 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:12:25.203308 | orchestrator | 2026-03-05 01:12:25.203316 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-03-05 01:12:25.203323 | orchestrator | Thursday 05 March 2026 01:07:32 +0000 (0:00:00.176) 0:01:17.711 ******** 2026-03-05 01:12:25.203329 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:12:25.203336 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:12:25.203343 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:12:25.203350 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:12:25.203363 | 
orchestrator | skipping: [testbed-node-4] 2026-03-05 01:12:25.203369 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:12:25.203376 | orchestrator | 2026-03-05 01:12:25.203383 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-03-05 01:12:25.203390 | orchestrator | Thursday 05 March 2026 01:07:33 +0000 (0:00:00.966) 0:01:18.678 ******** 2026-03-05 01:12:25.203397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-05 01:12:25.203404 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:12:25.203415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-05 01:12:25.203424 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:12:25.203431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-05 01:12:25.203439 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:12:25.203710 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-05 01:12:25.203733 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:12:25.203741 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-05 01:12:25.203749 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:12:25.203756 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-05 01:12:25.203767 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:12:25.203774 | orchestrator | 
2026-03-05 01:12:25.203780 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-03-05 01:12:25.203788 | orchestrator | Thursday 05 March 2026 01:07:39 +0000 (0:00:06.094) 0:01:24.773 ******** 2026-03-05 01:12:25.203818 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-05 01:12:25.203829 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-05 01:12:25.203865 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-05 01:12:25.203879 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-05 01:12:25.203891 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-05 01:12:25.203899 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-05 01:12:25.203906 | orchestrator | 2026-03-05 01:12:25.203912 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-03-05 01:12:25.203920 | orchestrator | Thursday 05 March 2026 01:07:46 +0000 (0:00:07.021) 0:01:31.794 ******** 2026-03-05 01:12:25.203926 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-05 01:12:25.203943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-05 01:12:25.203949 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-05 01:12:25.203959 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-05 01:12:25.203965 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-05 01:12:25.203971 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-05 01:12:25.203982 | orchestrator | 2026-03-05 01:12:25.203988 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-03-05 01:12:25.203995 | orchestrator | Thursday 05 March 2026 01:07:59 +0000 (0:00:12.562) 0:01:44.356 ******** 2026-03-05 01:12:25.204006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9696', 'listen_port': '9696'}}}})  2026-03-05 01:12:25.204013 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:12:25.204021 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-05 01:12:25.204028 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:12:25.204038 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-05 01:12:25.204045 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:12:25.204051 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-05 01:12:25.204062 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:12:25.204069 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-05 01:12:25.204076 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:12:25.204087 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 
'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-05 01:12:25.204095 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:12:25.204101 | orchestrator | 2026-03-05 01:12:25.204108 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-03-05 01:12:25.204114 | orchestrator | Thursday 05 March 2026 01:08:04 +0000 (0:00:05.294) 0:01:49.651 ******** 2026-03-05 01:12:25.204120 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:12:25.204127 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:12:25.204132 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:12:25.204138 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:12:25.204144 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:12:25.204150 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:12:25.204156 | orchestrator | 2026-03-05 01:12:25.204163 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-03-05 01:12:25.204169 | orchestrator | Thursday 05 March 2026 01:08:09 +0000 (0:00:05.072) 0:01:54.724 ******** 2026-03-05 01:12:25.204179 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-05 01:12:25.204187 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:12:25.204194 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-05 01:12:25.204205 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:12:25.204212 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-05 01:12:25.204218 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:12:25.204231 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-05 01:12:25.204238 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-05 01:12:25.204249 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-05 01:12:25.204256 | orchestrator | 2026-03-05 01:12:25.204263 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2026-03-05 01:12:25.204270 | orchestrator | Thursday 05 March 2026 01:08:14 +0000 (0:00:05.153) 0:01:59.878 ******** 2026-03-05 01:12:25.204285 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:12:25.204291 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:12:25.204298 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:12:25.204304 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:12:25.204310 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:12:25.204317 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:12:25.204323 | orchestrator | 2026-03-05 01:12:25.204330 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2026-03-05 01:12:25.204336 | orchestrator | Thursday 05 March 2026 01:08:18 +0000 (0:00:03.328) 
0:02:03.206 ******** 2026-03-05 01:12:25.204343 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:12:25.204350 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:12:25.204357 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:12:25.204364 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:12:25.204371 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:12:25.204377 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:12:25.204383 | orchestrator | 2026-03-05 01:12:25.204389 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-03-05 01:12:25.204396 | orchestrator | Thursday 05 March 2026 01:08:22 +0000 (0:00:04.487) 0:02:07.694 ******** 2026-03-05 01:12:25.204403 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:12:25.204410 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:12:25.204416 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:12:25.204422 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:12:25.204429 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:12:25.204435 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:12:25.204442 | orchestrator | 2026-03-05 01:12:25.204449 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2026-03-05 01:12:25.204457 | orchestrator | Thursday 05 March 2026 01:08:28 +0000 (0:00:05.881) 0:02:13.575 ******** 2026-03-05 01:12:25.204464 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:12:25.204471 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:12:25.204479 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:12:25.204486 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:12:25.204493 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:12:25.204500 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:12:25.204507 | orchestrator | 2026-03-05 01:12:25.204514 | orchestrator | TASK [neutron : Copying over 
eswitchd.conf] ************************************ 2026-03-05 01:12:25.204520 | orchestrator | Thursday 05 March 2026 01:08:31 +0000 (0:00:02.947) 0:02:16.522 ******** 2026-03-05 01:12:25.204527 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:12:25.204534 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:12:25.204540 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:12:25.204547 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:12:25.204557 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:12:25.204564 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:12:25.204570 | orchestrator | 2026-03-05 01:12:25.204576 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2026-03-05 01:12:25.204583 | orchestrator | Thursday 05 March 2026 01:08:34 +0000 (0:00:03.209) 0:02:19.732 ******** 2026-03-05 01:12:25.204591 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:12:25.204597 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:12:25.204603 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:12:25.204610 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:12:25.204616 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:12:25.204622 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:12:25.204629 | orchestrator | 2026-03-05 01:12:25.204636 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2026-03-05 01:12:25.204642 | orchestrator | Thursday 05 March 2026 01:08:37 +0000 (0:00:02.655) 0:02:22.387 ******** 2026-03-05 01:12:25.204649 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-05 01:12:25.204663 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:12:25.204670 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-05 01:12:25.204676 | orchestrator | skipping: [testbed-node-0] 
2026-03-05 01:12:25.204682 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-05 01:12:25.204688 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:12:25.204695 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-05 01:12:25.204701 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:12:25.204708 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-05 01:12:25.204714 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:12:25.204721 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-05 01:12:25.204728 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:12:25.204734 | orchestrator | 2026-03-05 01:12:25.204741 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2026-03-05 01:12:25.204748 | orchestrator | Thursday 05 March 2026 01:08:40 +0000 (0:00:03.085) 0:02:25.472 ******** 2026-03-05 01:12:25.204759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 
'listen_port': '9696'}}}})  2026-03-05 01:12:25.204766 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:12:25.204772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-05 01:12:25.204779 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:12:25.204790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 
'listen_port': '9696'}}}})  2026-03-05 01:12:25.204801 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:12:25.204807 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-05 01:12:25.204814 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:12:25.204820 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-05 01:12:25.204826 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:12:25.204831 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-05 01:12:25.204837 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:12:25.204866 | orchestrator | 2026-03-05 01:12:25.204872 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-03-05 01:12:25.204878 | orchestrator | Thursday 05 March 2026 01:08:43 +0000 (0:00:03.609) 0:02:29.081 ******** 2026-03-05 01:12:25.204911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-05 01:12:25.204919 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:12:25.204932 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-05 01:12:25.204944 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:12:25.204949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-05 01:12:25.204959 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 
'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-05 01:12:25.204965 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:12:25.204970 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:12:25.204976 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-05 01:12:25.204983 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:12:25.204988 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-05 01:12:25.205000 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:12:25.205007 | orchestrator | 2026-03-05 01:12:25.205012 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2026-03-05 01:12:25.205018 | orchestrator | Thursday 05 March 2026 01:08:47 +0000 (0:00:03.362) 0:02:32.443 ******** 2026-03-05 01:12:25.205024 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:12:25.205034 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:12:25.205040 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:12:25.205046 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:12:25.205051 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:12:25.205057 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:12:25.205063 | orchestrator | 2026-03-05 01:12:25.205069 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2026-03-05 01:12:25.205074 | orchestrator | Thursday 05 March 2026 01:08:50 +0000 (0:00:02.928) 0:02:35.372 ******** 2026-03-05 01:12:25.205080 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:12:25.205086 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:12:25.205091 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:12:25.205097 | orchestrator | changed: [testbed-node-5] 2026-03-05 01:12:25.205103 | orchestrator | changed: [testbed-node-3] 2026-03-05 01:12:25.205108 | orchestrator | changed: [testbed-node-4] 2026-03-05 01:12:25.205114 | orchestrator | 
2026-03-05 01:12:25.205120 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2026-03-05 01:12:25.205126 | orchestrator | Thursday 05 March 2026 01:08:56 +0000 (0:00:05.822) 0:02:41.195 ******** 2026-03-05 01:12:25.205132 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:12:25.205138 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:12:25.205144 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:12:25.205149 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:12:25.205156 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:12:25.205161 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:12:25.205167 | orchestrator | 2026-03-05 01:12:25.205173 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2026-03-05 01:12:25.205179 | orchestrator | Thursday 05 March 2026 01:08:59 +0000 (0:00:02.986) 0:02:44.181 ******** 2026-03-05 01:12:25.205185 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:12:25.205191 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:12:25.205198 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:12:25.205204 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:12:25.205211 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:12:25.205218 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:12:25.205225 | orchestrator | 2026-03-05 01:12:25.205231 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2026-03-05 01:12:25.205237 | orchestrator | Thursday 05 March 2026 01:09:02 +0000 (0:00:03.903) 0:02:48.085 ******** 2026-03-05 01:12:25.205244 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:12:25.205251 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:12:25.205258 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:12:25.205263 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:12:25.205275 | orchestrator | 
skipping: [testbed-node-4] 2026-03-05 01:12:25.205283 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:12:25.205290 | orchestrator | 2026-03-05 01:12:25.205296 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2026-03-05 01:12:25.205302 | orchestrator | Thursday 05 March 2026 01:09:09 +0000 (0:00:06.399) 0:02:54.484 ******** 2026-03-05 01:12:25.205309 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:12:25.205317 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:12:25.205323 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:12:25.205329 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:12:25.205336 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:12:25.205349 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:12:25.205355 | orchestrator | 2026-03-05 01:12:25.205363 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2026-03-05 01:12:25.205370 | orchestrator | Thursday 05 March 2026 01:09:13 +0000 (0:00:03.921) 0:02:58.406 ******** 2026-03-05 01:12:25.205376 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:12:25.205383 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:12:25.205390 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:12:25.205396 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:12:25.205403 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:12:25.205410 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:12:25.205416 | orchestrator | 2026-03-05 01:12:25.205422 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2026-03-05 01:12:25.205429 | orchestrator | Thursday 05 March 2026 01:09:15 +0000 (0:00:02.631) 0:03:01.037 ******** 2026-03-05 01:12:25.205436 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:12:25.205443 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:12:25.205450 | orchestrator | 
skipping: [testbed-node-2] 2026-03-05 01:12:25.205456 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:12:25.205463 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:12:25.205470 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:12:25.205477 | orchestrator | 2026-03-05 01:12:25.205483 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2026-03-05 01:12:25.205490 | orchestrator | Thursday 05 March 2026 01:09:19 +0000 (0:00:03.422) 0:03:04.460 ******** 2026-03-05 01:12:25.205497 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:12:25.205503 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:12:25.205509 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:12:25.205516 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:12:25.205523 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:12:25.205530 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:12:25.205537 | orchestrator | 2026-03-05 01:12:25.205543 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2026-03-05 01:12:25.205550 | orchestrator | Thursday 05 March 2026 01:09:22 +0000 (0:00:03.211) 0:03:07.672 ******** 2026-03-05 01:12:25.205557 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-05 01:12:25.205565 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:12:25.205571 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-05 01:12:25.205577 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:12:25.205584 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-05 01:12:25.205591 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:12:25.205598 | orchestrator | skipping: [testbed-node-2] => 
(item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-05 01:12:25.205605 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:12:25.205616 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-05 01:12:25.205622 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:12:25.205628 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-05 01:12:25.205635 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:12:25.205641 | orchestrator | 2026-03-05 01:12:25.205649 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2026-03-05 01:12:25.205655 | orchestrator | Thursday 05 March 2026 01:09:27 +0000 (0:00:05.143) 0:03:12.816 ******** 2026-03-05 01:12:25.205662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-05 01:12:25.205677 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:12:25.205688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 
'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-05 01:12:25.205696 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:12:25.205703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-05 01:12:25.205710 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:12:25.205717 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-05 01:12:25.205724 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:12:25.205736 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-05 01:12:25.205747 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:12:25.205754 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-05 01:12:25.205761 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:12:25.205768 | orchestrator | 2026-03-05 01:12:25.205774 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2026-03-05 01:12:25.205781 | orchestrator | Thursday 05 March 2026 01:09:31 +0000 (0:00:04.282) 0:03:17.098 ******** 2026-03-05 01:12:25.205792 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-05 01:12:25.205800 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-05 01:12:25.205812 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-05 01:12:25.205821 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-05 01:12:25.205859 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-05 01:12:25.205868 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-05 01:12:25.205874 | orchestrator | 2026-03-05 01:12:25.205880 | orchestrator | TASK [neutron : include_tasks] 
************************************************* 2026-03-05 01:12:25.205887 | orchestrator | Thursday 05 March 2026 01:09:35 +0000 (0:00:03.618) 0:03:20.717 ******** 2026-03-05 01:12:25.205892 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:12:25.205898 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:12:25.205904 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:12:25.205910 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:12:25.205916 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:12:25.205921 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:12:25.205927 | orchestrator | 2026-03-05 01:12:25.205933 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2026-03-05 01:12:25.205938 | orchestrator | Thursday 05 March 2026 01:09:36 +0000 (0:00:00.804) 0:03:21.521 ******** 2026-03-05 01:12:25.205943 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:12:25.205949 | orchestrator | 2026-03-05 01:12:25.205955 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2026-03-05 01:12:25.205960 | orchestrator | Thursday 05 March 2026 01:09:39 +0000 (0:00:02.806) 0:03:24.328 ******** 2026-03-05 01:12:25.205966 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:12:25.205971 | orchestrator | 2026-03-05 01:12:25.205977 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2026-03-05 01:12:25.205983 | orchestrator | Thursday 05 March 2026 01:09:41 +0000 (0:00:02.727) 0:03:27.055 ******** 2026-03-05 01:12:25.205988 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:12:25.205994 | orchestrator | 2026-03-05 01:12:25.206000 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-05 01:12:25.206050 | orchestrator | Thursday 05 March 2026 01:10:48 +0000 (0:01:06.897) 0:04:33.953 ******** 2026-03-05 01:12:25.206074 | orchestrator | 
2026-03-05 01:12:25.206084 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-05 01:12:25.206090 | orchestrator | Thursday 05 March 2026 01:10:48 +0000 (0:00:00.074) 0:04:34.028 ******** 2026-03-05 01:12:25.206096 | orchestrator | 2026-03-05 01:12:25.206109 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-05 01:12:25.206118 | orchestrator | Thursday 05 March 2026 01:10:49 +0000 (0:00:00.311) 0:04:34.339 ******** 2026-03-05 01:12:25.206124 | orchestrator | 2026-03-05 01:12:25.206130 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-05 01:12:25.206136 | orchestrator | Thursday 05 March 2026 01:10:49 +0000 (0:00:00.070) 0:04:34.410 ******** 2026-03-05 01:12:25.206145 | orchestrator | 2026-03-05 01:12:25.206159 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-05 01:12:25.206166 | orchestrator | Thursday 05 March 2026 01:10:49 +0000 (0:00:00.075) 0:04:34.486 ******** 2026-03-05 01:12:25.206180 | orchestrator | 2026-03-05 01:12:25.206189 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-05 01:12:25.206195 | orchestrator | Thursday 05 March 2026 01:10:49 +0000 (0:00:00.084) 0:04:34.570 ******** 2026-03-05 01:12:25.206200 | orchestrator | 2026-03-05 01:12:25.206206 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2026-03-05 01:12:25.206212 | orchestrator | Thursday 05 March 2026 01:10:49 +0000 (0:00:00.075) 0:04:34.646 ******** 2026-03-05 01:12:25.206218 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:12:25.206224 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:12:25.206230 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:12:25.206235 | orchestrator | 2026-03-05 01:12:25.206242 | orchestrator | RUNNING HANDLER [neutron : 
Restart neutron-ovn-metadata-agent container] ******* 2026-03-05 01:12:25.206248 | orchestrator | Thursday 05 March 2026 01:11:20 +0000 (0:00:30.620) 0:05:05.267 ******** 2026-03-05 01:12:25.206254 | orchestrator | changed: [testbed-node-4] 2026-03-05 01:12:25.206259 | orchestrator | changed: [testbed-node-5] 2026-03-05 01:12:25.206265 | orchestrator | changed: [testbed-node-3] 2026-03-05 01:12:25.206270 | orchestrator | 2026-03-05 01:12:25.206277 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-05 01:12:25.206283 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-05 01:12:25.206292 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-03-05 01:12:25.206298 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-03-05 01:12:25.206304 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-05 01:12:25.206316 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-05 01:12:25.206322 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-05 01:12:25.206328 | orchestrator | 2026-03-05 01:12:25.206334 | orchestrator | 2026-03-05 01:12:25.206340 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-05 01:12:25.206346 | orchestrator | Thursday 05 March 2026 01:12:22 +0000 (0:01:02.812) 0:06:08.080 ******** 2026-03-05 01:12:25.206352 | orchestrator | =============================================================================== 2026-03-05 01:12:25.206359 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 66.90s 2026-03-05 01:12:25.206367 | orchestrator | neutron : Restart 
neutron-ovn-metadata-agent container ----------------- 62.81s 2026-03-05 01:12:25.206373 | orchestrator | neutron : Restart neutron-server container ----------------------------- 30.62s 2026-03-05 01:12:25.206388 | orchestrator | neutron : Copying over neutron.conf ------------------------------------ 12.56s 2026-03-05 01:12:25.206394 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 8.36s 2026-03-05 01:12:25.206400 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 7.34s 2026-03-05 01:12:25.206406 | orchestrator | neutron : Copying over config.json files for services ------------------- 7.02s 2026-03-05 01:12:25.206412 | orchestrator | neutron : Copying over bgp_dragent.ini ---------------------------------- 6.40s 2026-03-05 01:12:25.206418 | orchestrator | neutron : Copying over existing policy file ----------------------------- 6.09s 2026-03-05 01:12:25.206424 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 5.98s 2026-03-05 01:12:25.206430 | orchestrator | neutron : Copying over sriov_agent.ini ---------------------------------- 5.88s 2026-03-05 01:12:25.206436 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 5.82s 2026-03-05 01:12:25.206442 | orchestrator | neutron : Copying over neutron_vpnaas.conf ------------------------------ 5.30s 2026-03-05 01:12:25.206448 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS certificate --- 5.22s 2026-03-05 01:12:25.206454 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 5.15s 2026-03-05 01:12:25.206460 | orchestrator | neutron : Copying over neutron-tls-proxy.cfg ---------------------------- 5.15s 2026-03-05 01:12:25.206466 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 5.09s 2026-03-05 01:12:25.206472 | orchestrator | neutron : Copying over ssh 
key ------------------------------------------ 5.07s
2026-03-05 01:12:25.206478 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 4.53s
2026-03-05 01:12:25.206484 | orchestrator | neutron : Copying over openvswitch_agent.ini ---------------------------- 4.49s
2026-03-05 01:12:25.206491 | orchestrator | 2026-03-05 01:12:25 | INFO  | Wait 1 second(s) until the next check
2026-03-05 01:12:28.238486 | orchestrator | 2026-03-05 01:12:28 | INFO  | Task bad745dc-6652-483b-b4c6-4cf5575a6b78 is in state STARTED
2026-03-05 01:12:28.240991 | orchestrator | 2026-03-05 01:12:28 | INFO  | Task 759ccc4e-add6-4f79-94bc-0b6860a1298b is in state STARTED
2026-03-05 01:12:28.246150 | orchestrator | 2026-03-05 01:12:28 | INFO  | Task 67e57e23-14ae-4369-bbb7-30e6912525b4 is in state STARTED
2026-03-05 01:12:28.246619 | orchestrator | 2026-03-05 01:12:28 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED
2026-03-05 01:12:28.246643 | orchestrator | 2026-03-05 01:12:28 | INFO  | Wait 1 second(s) until the next check
2026-03-05 01:12:34.337767 | orchestrator | 2026-03-05 01:12:34 | INFO  | Task 67e57e23-14ae-4369-bbb7-30e6912525b4 is in state SUCCESS
2026-03-05 01:12:37.398105 | orchestrator | 2026-03-05 01:12:37 | INFO  | Task 0021797a-9fe1-46b8-bd3f-84c82d3a40b5 is in state STARTED
2026-03-05 01:14:05.660681 | orchestrator | 2026-03-05 01:14:05 | INFO  | Task 759ccc4e-add6-4f79-94bc-0b6860a1298b is in state SUCCESS
2026-03-05 01:14:05.662133 | orchestrator |
2026-03-05 01:14:05.662341 | orchestrator |
01:14:05.662356 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-05 01:14:05.662368 | orchestrator |
2026-03-05 01:14:05.662405 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-05 01:14:05.662414 | orchestrator | Thursday 05 March 2026 01:12:30 +0000 (0:00:00.189) 0:00:00.189 ********
2026-03-05 01:14:05.662423 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:14:05.662433 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:14:05.662442 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:14:05.662452 | orchestrator |
2026-03-05 01:14:05.662462 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-05 01:14:05.662471 | orchestrator | Thursday 05 March 2026 01:12:30 +0000 (0:00:00.318) 0:00:00.507 ********
2026-03-05 01:14:05.662481 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True)
2026-03-05 01:14:05.662491 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True)
2026-03-05 01:14:05.662501 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True)
2026-03-05 01:14:05.662511 | orchestrator |
2026-03-05 01:14:05.662521 | orchestrator | PLAY [Wait for the Nova service] ***********************************************
2026-03-05 01:14:05.662530 | orchestrator |
2026-03-05 01:14:05.662538 | orchestrator | TASK [Waiting for Nova public port to be UP] ***********************************
2026-03-05 01:14:05.662547 | orchestrator | Thursday 05 March 2026 01:12:31 +0000 (0:00:00.793) 0:00:01.301 ********
2026-03-05 01:14:05.662556 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:14:05.662565 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:14:05.662575 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:14:05.662585 | orchestrator |
2026-03-05 01:14:05.662595 | orchestrator | PLAY RECAP *********************************************************************
2026-03-05 01:14:05.662606 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-05 01:14:05.662619 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-05 01:14:05.662628 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-05 01:14:05.662638 | orchestrator |
2026-03-05 01:14:05.662647 | orchestrator |
2026-03-05 01:14:05.662658 | orchestrator | TASKS RECAP ********************************************************************
2026-03-05 01:14:05.662668 | orchestrator | Thursday 05 March 2026 01:12:32 +0000 (0:00:00.872) 0:00:02.174 ********
2026-03-05 01:14:05.662679 | orchestrator | ===============================================================================
2026-03-05 01:14:05.662689 | orchestrator | Waiting for Nova public port to be UP ----------------------------------- 0.87s
2026-03-05 01:14:05.662700 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.79s
2026-03-05 01:14:05.662710 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s
2026-03-05 01:14:05.662721 | orchestrator |
2026-03-05 01:14:05.662731 | orchestrator |
2026-03-05 01:14:05.662742 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-05 01:14:05.662802 | orchestrator |
2026-03-05 01:14:05.662813 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-05 01:14:05.662824 | orchestrator | Thursday 05 March 2026 01:11:57 +0000 (0:00:00.331) 0:00:00.331 ********
2026-03-05 01:14:05.662834 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:14:05.662843 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:14:05.662852 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:14:05.662862 | orchestrator |
2026-03-05 01:14:05.662872 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-05 01:14:05.662882 | orchestrator | Thursday 05 March 2026 01:11:58 +0000 (0:00:00.358) 0:00:00.690 ********
2026-03-05 01:14:05.662892 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2026-03-05 01:14:05.662902 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2026-03-05 01:14:05.662912 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2026-03-05 01:14:05.662923 | orchestrator |
2026-03-05 01:14:05.662932 | orchestrator | PLAY [Apply role magnum] *******************************************************
2026-03-05 01:14:05.662942 | orchestrator |
2026-03-05 01:14:05.662951 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-03-05 01:14:05.662959 | orchestrator | Thursday 05 March 2026 01:11:58 +0000 (0:00:00.501) 0:00:01.192 ********
2026-03-05 01:14:05.662968 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 01:14:05.662978 | orchestrator |
2026-03-05 01:14:05.662988 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************
2026-03-05 01:14:05.662996 | orchestrator | Thursday 05 March 2026 01:11:59 +0000 (0:00:00.696) 0:00:01.889 ********
2026-03-05 01:14:05.663006 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra))
2026-03-05 01:14:05.663015 | orchestrator |
2026-03-05 01:14:05.663024 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] ***********************
2026-03-05 01:14:05.663032 | orchestrator | Thursday 05 March 2026 01:12:03 +0000 (0:00:04.147) 0:00:06.036 ********
2026-03-05 01:14:05.663041 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal)
2026-03-05 01:14:05.663050 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public)
2026-03-05 01:14:05.663059 | orchestrator |
2026-03-05 01:14:05.663069 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************
2026-03-05 01:14:05.663078 | orchestrator | Thursday 05 March 2026 01:12:10 +0000 (0:00:07.197) 0:00:13.234 ********
2026-03-05 01:14:05.663087 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-05 01:14:05.663095 | orchestrator |
2026-03-05 01:14:05.663104 | orchestrator | TASK [service-ks-register : magnum | Creating users] ***************************
2026-03-05 01:14:05.663113 | orchestrator | Thursday 05 March 2026 01:12:14 +0000 (0:00:03.783) 0:00:17.018 ********
2026-03-05 01:14:05.663142 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service)
2026-03-05 01:14:05.663152 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-05 01:14:05.663160 | orchestrator |
2026-03-05 01:14:05.663170 | orchestrator | TASK [service-ks-register : magnum | Creating roles] ***************************
2026-03-05 01:14:05.663180 | orchestrator | Thursday 05 March 2026 01:12:18 +0000 (0:00:04.274) 0:00:21.292 ********
2026-03-05 01:14:05.663188 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-05 01:14:05.663196 | orchestrator |
2026-03-05 01:14:05.663205 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] **********************
2026-03-05 01:14:05.663214 | orchestrator | Thursday 05 March 2026 01:12:22 +0000 (0:00:03.700) 0:00:24.992 ********
2026-03-05 01:14:05.663223 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin)
2026-03-05 01:14:05.663231 | orchestrator |
2026-03-05 01:14:05.663240 | orchestrator | TASK [magnum : Creating Magnum trustee domain] *********************************
2026-03-05 01:14:05.663249 | orchestrator | Thursday 05 March 2026 01:12:27 +0000 (0:00:04.802) 0:00:29.795 ********
2026-03-05 01:14:05.663258 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:14:05.663277 | orchestrator |
2026-03-05 01:14:05.663285 | orchestrator | TASK [magnum : Creating Magnum trustee user] ***********************************
2026-03-05 01:14:05.663294 | orchestrator | Thursday 05 March 2026 01:12:31 +0000 (0:00:03.800) 0:00:33.595 ********
2026-03-05 01:14:05.663303 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:14:05.663311 | orchestrator |
2026-03-05 01:14:05.663320 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ******************************
2026-03-05 01:14:05.663329 | orchestrator | Thursday 05 March 2026 01:12:35 +0000 (0:00:04.608) 0:00:38.204 ********
2026-03-05 01:14:05.663338 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:14:05.663346 | orchestrator |
2026-03-05 01:14:05.663354 | orchestrator | TASK [magnum : Ensuring config directories exist] ******************************
2026-03-05 01:14:05.663363 | orchestrator | Thursday 05 March 2026 01:12:39 +0000 (0:00:03.895) 0:00:42.099 ********
2026-03-05 01:14:05.663378 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-05 01:14:05.663392 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-05 01:14:05.663402 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-05 01:14:05.663421 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-05 01:14:05.663440 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-05 01:14:05.663449 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-05 01:14:05.663460 | orchestrator |
2026-03-05 01:14:05.663468 | orchestrator | TASK [magnum : Check if policies shall be overwritten] *************************
2026-03-05 01:14:05.663478 | orchestrator | Thursday 05 March 2026 01:12:41 +0000 (0:00:00.156) 0:00:43.951 ********
2026-03-05 01:14:05.663487 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:14:05.663496 | orchestrator |
2026-03-05 01:14:05.663504 | orchestrator | TASK [magnum : Set magnum policy file] *****************************************
2026-03-05 01:14:05.663513 | orchestrator | Thursday 05 March 2026 01:12:41 +0000 (0:00:00.510) 0:00:44.107 ********
2026-03-05 01:14:05.663521 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:14:05.663530 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:14:05.663539 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:14:05.663546 | orchestrator |
2026-03-05 01:14:05.663555 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] ***************************
2026-03-05 01:14:05.663565 | orchestrator | Thursday 05 March 2026 01:12:42 +0000 (0:00:00.990) 0:00:44.617 ********
2026-03-05 01:14:05.663575 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-05 01:14:05.663583 | orchestrator |
2026-03-05 01:14:05.663618 | orchestrator | TASK [magnum : Copying over kubeconfig file] ***********************************
2026-03-05 01:14:05.663627 | orchestrator | Thursday 05 March 2026 01:12:43 +0000 (0:00:00.990) 0:00:45.608 ********
2026-03-05 01:14:05.663637 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-05 01:14:05.663662 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-05 01:14:05.663673 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-05 01:14:05.663682 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-05 01:14:05.663716 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor
5672'], 'timeout': '30'}}}) 2026-03-05 01:14:05.663735 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-05 01:14:05.663772 | orchestrator | 2026-03-05 01:14:05.663783 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2026-03-05 01:14:05.663794 | orchestrator | Thursday 05 March 2026 01:12:45 +0000 (0:00:02.714) 0:00:48.322 ******** 2026-03-05 01:14:05.663803 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:14:05.663812 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:14:05.663821 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:14:05.663831 | orchestrator | 2026-03-05 01:14:05.663841 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-05 01:14:05.663856 | orchestrator | Thursday 05 March 2026 01:12:46 +0000 (0:00:00.476) 0:00:48.799 ******** 2026-03-05 01:14:05.663865 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 01:14:05.663874 | orchestrator | 2026-03-05 01:14:05.663883 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2026-03-05 01:14:05.663891 | orchestrator | Thursday 05 March 2026 01:12:47 +0000 (0:00:00.939) 0:00:49.739 ******** 
2026-03-05 01:14:05.663901 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-05 01:14:05.663910 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-05 01:14:05.663920 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 
'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-05 01:14:05.663929 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-05 01:14:05.663953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-05 01:14:05.663963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-05 01:14:05.663972 | orchestrator | 2026-03-05 01:14:05.663981 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-03-05 01:14:05.663989 | orchestrator | Thursday 05 March 2026 01:12:49 +0000 (0:00:02.635) 0:00:52.375 ******** 2026-03-05 01:14:05.663999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-05 01:14:05.664009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-05 01:14:05.664025 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:14:05.664035 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 
'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-05 01:14:05.664052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-05 01:14:05.664062 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:14:05.664071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-05 01:14:05.664080 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-05 01:14:05.664091 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:14:05.664100 | orchestrator | 2026-03-05 01:14:05.664109 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-03-05 01:14:05.664118 | orchestrator | Thursday 05 March 2026 01:12:50 +0000 (0:00:00.854) 0:00:53.229 ******** 2026-03-05 01:14:05.664128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': 
'9511'}}}})  2026-03-05 01:14:05.664145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-05 01:14:05.664155 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:14:05.664171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-05 01:14:05.664180 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-05 01:14:05.664190 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:14:05.664199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-05 01:14:05.664215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-05 01:14:05.664224 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:14:05.664232 | orchestrator | 2026-03-05 01:14:05.664240 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-03-05 01:14:05.664249 | orchestrator | Thursday 05 March 2026 01:12:51 +0000 (0:00:01.204) 0:00:54.434 ******** 2026-03-05 01:14:05.664513 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-05 01:14:05.664537 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-05 01:14:05.664548 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-05 01:14:05.664558 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-05 01:14:05.664589 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-05 01:14:05.664606 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-05 01:14:05.664616 | orchestrator | 2026-03-05 01:14:05.664625 | orchestrator | TASK 
[magnum : Copying over magnum.conf] *************************************** 2026-03-05 01:14:05.664634 | orchestrator | Thursday 05 March 2026 01:12:54 +0000 (0:00:02.301) 0:00:56.736 ******** 2026-03-05 01:14:05.664643 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-05 01:14:05.664653 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-05 01:14:05.664669 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-05 01:14:05.664678 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-05 01:14:05.664692 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-05 01:14:05.664702 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-05 01:14:05.664710 | orchestrator | 2026-03-05 01:14:05.664719 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-03-05 01:14:05.664727 | orchestrator | Thursday 05 March 2026 01:12:59 +0000 (0:00:05.482) 0:01:02.218 ******** 2026-03-05 01:14:05.664736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-05 01:14:05.664790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-05 01:14:05.664802 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:14:05.664813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-05 01:14:05.664856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-05 01:14:05.664868 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:14:05.664877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-05 01:14:05.664886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-05 01:14:05.664901 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:14:05.664910 | orchestrator | 2026-03-05 01:14:05.664918 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2026-03-05 01:14:05.664926 | orchestrator | Thursday 05 March 2026 01:13:00 +0000 (0:00:00.717) 0:01:02.936 ******** 2026-03-05 01:14:05.664934 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 
'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-05 01:14:05.664972 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-05 01:14:05.664984 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-05 01:14:05.664994 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-05 01:14:05.665010 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-05 01:14:05.665019 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-05 01:14:05.665028 | orchestrator |
2026-03-05 01:14:05.665036 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-03-05 01:14:05.665045 | orchestrator | Thursday 05 March 2026 01:13:02 +0000 (0:00:02.447) 0:01:05.383 ********
2026-03-05 01:14:05.665053 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:14:05.665063 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:14:05.665071 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:14:05.665080 | orchestrator |
2026-03-05 01:14:05.665105 | orchestrator | TASK [magnum : Creating Magnum database] ***************************************
2026-03-05 01:14:05.665115 | orchestrator | Thursday 05 March 2026 01:13:03 +0000 (0:00:00.289) 0:01:05.672 ********
2026-03-05 01:14:05.665125 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:14:05.665134 | orchestrator |
2026-03-05 01:14:05.665144 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] **********
2026-03-05 01:14:05.665154 | orchestrator | Thursday 05 March 2026 01:13:05 +0000 (0:00:02.456) 0:01:08.129 ********
2026-03-05 01:14:05.665164 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:14:05.665173 | orchestrator |
2026-03-05 01:14:05.665183 | orchestrator | TASK [magnum : Running Magnum bootstrap container] *****************************
2026-03-05 01:14:05.665192 | orchestrator | Thursday 05 March 2026 01:13:07 +0000 (0:00:02.339) 0:01:10.468 ********
2026-03-05 01:14:05.665208 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:14:05.665219 | orchestrator |
2026-03-05 01:14:05.665231 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2026-03-05 01:14:05.665241 | orchestrator | Thursday 05 March 2026 01:13:26 +0000 (0:00:18.531) 0:01:28.999 ********
2026-03-05 01:14:05.665251 | orchestrator |
2026-03-05 01:14:05.665261 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2026-03-05 01:14:05.665271 | orchestrator | Thursday 05 March 2026 01:13:26 +0000 (0:00:00.076) 0:01:29.075 ********
2026-03-05 01:14:05.665281 | orchestrator |
2026-03-05 01:14:05.665291 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2026-03-05 01:14:05.665301 | orchestrator | Thursday 05 March 2026 01:13:26 +0000 (0:00:00.067) 0:01:29.143 ********
2026-03-05 01:14:05.665320 | orchestrator |
2026-03-05 01:14:05.665330 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************
2026-03-05 01:14:05.665339 | orchestrator | Thursday 05 March 2026 01:13:26 +0000 (0:00:00.080) 0:01:29.223 ********
2026-03-05 01:14:05.665348 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:14:05.665359 | orchestrator | changed: [testbed-node-1]
2026-03-05 01:14:05.665368 | orchestrator | changed: [testbed-node-2]
2026-03-05 01:14:05.665376 | orchestrator |
2026-03-05 01:14:05.665385 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ******************
2026-03-05 01:14:05.665395 | orchestrator | Thursday 05 March 2026 01:13:43 +0000 (0:00:16.503) 0:01:45.727 ********
2026-03-05 01:14:05.665405 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:14:05.665414 | orchestrator | changed: [testbed-node-1]
2026-03-05 01:14:05.665422 | orchestrator | changed: [testbed-node-2]
2026-03-05 01:14:05.665431 | orchestrator |
2026-03-05 01:14:05.665440 | orchestrator | PLAY RECAP *********************************************************************
2026-03-05 01:14:05.665450 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-05 01:14:05.665461 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-05 01:14:05.665471 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-05 01:14:05.665481 | orchestrator |
2026-03-05 01:14:05.665490 | orchestrator |
2026-03-05 01:14:05.665499 | orchestrator | TASKS RECAP ********************************************************************
2026-03-05 01:14:05.665507 | orchestrator | Thursday 05 March 2026 01:14:02 +0000 (0:00:18.988) 0:02:04.716 ********
2026-03-05 01:14:05.665515 | orchestrator | ===============================================================================
2026-03-05 01:14:05.665522 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 18.99s
2026-03-05 01:14:05.665531 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 18.53s
2026-03-05 01:14:05.665539 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 16.50s
2026-03-05 01:14:05.665548 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 7.20s
2026-03-05 01:14:05.665556 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 5.48s
2026-03-05 01:14:05.665563 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.80s
2026-03-05 01:14:05.665571 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.61s
2026-03-05 01:14:05.665580 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.27s
2026-03-05 01:14:05.665588 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 4.15s
2026-03-05 01:14:05.665596 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.90s
2026-03-05 01:14:05.665604 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.80s
2026-03-05 01:14:05.665611 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.78s
2026-03-05 01:14:05.665619 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.70s
2026-03-05 01:14:05.665627 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.71s
2026-03-05 01:14:05.665635 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.64s
2026-03-05 01:14:05.665643 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.46s
2026-03-05 01:14:05.665651 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.45s
2026-03-05 01:14:05.665659 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.34s
2026-03-05 01:14:05.665667 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.30s
2026-03-05 01:14:05.665685 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.85s
2026-03-05 01:14:05.665695 | orchestrator | 2026-03-05 01:14:05 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED
2026-03-05 01:14:05.665856 | orchestrator | 2026-03-05 01:14:05 | INFO  | Task 0021797a-9fe1-46b8-bd3f-84c82d3a40b5 is in state STARTED
2026-03-05 01:14:05.665874 | orchestrator | 2026-03-05 01:14:05 | INFO  | Wait 1 second(s) until the next check
2026-03-05 01:14:08.729455 | orchestrator | 2026-03-05 01:14:08 | INFO  | Task bad745dc-6652-483b-b4c6-4cf5575a6b78 is in state STARTED
2026-03-05 01:14:08.732098 | orchestrator | 2026-03-05 01:14:08 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED
2026-03-05 01:14:08.733012 | orchestrator | 2026-03-05 01:14:08 | INFO  | Task 0021797a-9fe1-46b8-bd3f-84c82d3a40b5 is in state STARTED
2026-03-05 01:14:08.733035 | orchestrator | 2026-03-05 01:14:08 | INFO  | Wait 1 second(s) until the next check
2026-03-05 01:14:11.772116 | orchestrator | 2026-03-05 01:14:11 | INFO  | Task bad745dc-6652-483b-b4c6-4cf5575a6b78 is in state STARTED
2026-03-05 01:14:11.772658 | orchestrator | 2026-03-05 01:14:11 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED
2026-03-05 01:14:11.773898 | orchestrator | 2026-03-05 01:14:11 | INFO  | Task 0021797a-9fe1-46b8-bd3f-84c82d3a40b5 is in state STARTED
2026-03-05 01:14:11.773928 | orchestrator | 2026-03-05 01:14:11 | INFO  | Wait 1 second(s) until the next check
2026-03-05 01:14:14.812634 | orchestrator | 2026-03-05 01:14:14 | INFO  | Task bad745dc-6652-483b-b4c6-4cf5575a6b78 is in state STARTED
2026-03-05 01:14:14.813338 | orchestrator | 2026-03-05 01:14:14 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED
2026-03-05 01:14:14.815151 | orchestrator | 2026-03-05 01:14:14 | INFO  | Task 0021797a-9fe1-46b8-bd3f-84c82d3a40b5 is in state STARTED
2026-03-05 01:14:14.815200 | orchestrator | 2026-03-05 01:14:14 | INFO  | Wait 1 second(s) until the next check
2026-03-05 01:14:17.854253 | orchestrator | 2026-03-05 01:14:17 | INFO  | Task bad745dc-6652-483b-b4c6-4cf5575a6b78 is in state SUCCESS
2026-03-05 01:14:17.855154 | orchestrator |
2026-03-05 01:14:17.855195 | orchestrator |
2026-03-05 01:14:17.855204 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-05 01:14:17.855214 | orchestrator |
2026-03-05 01:14:17.855220 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-05 01:14:17.855228 | orchestrator | Thursday 05 March 2026 01:12:02 +0000 (0:00:00.319) 0:00:00.319 ********
2026-03-05 01:14:17.855234 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:14:17.855243 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:14:17.855250 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:14:17.855257 | orchestrator |
2026-03-05 01:14:17.855264 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-05 01:14:17.855271 | orchestrator | Thursday 05 March 2026 01:12:02 +0000 (0:00:00.360) 0:00:00.680 ********
2026-03-05 01:14:17.855278 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True)
2026-03-05 01:14:17.855286 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True)
2026-03-05 01:14:17.855293 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True)
2026-03-05 01:14:17.855301 | orchestrator |
2026-03-05 01:14:17.855307 | orchestrator | PLAY [Apply role grafana] ******************************************************
2026-03-05 01:14:17.855314 | orchestrator |
2026-03-05 01:14:17.855323 | orchestrator | TASK [grafana : include_tasks] *************************************************
2026-03-05 01:14:17.855331 | orchestrator | Thursday 05 March 2026 01:12:02 +0000 (0:00:00.479) 0:00:01.160 ********
2026-03-05 01:14:17.855338 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 01:14:17.855346 | orchestrator |
2026-03-05 01:14:17.855353 | orchestrator | TASK [grafana : Ensuring config directories exist] *****************************
2026-03-05 01:14:17.855386 | orchestrator | Thursday 05 March 2026 01:12:03 +0000 (0:00:00.565) 0:00:01.726 ********
2026-03-05 01:14:17.855398 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy':
{'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-05 01:14:17.855408 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-05 01:14:17.855417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-05 01:14:17.855424 | orchestrator | 2026-03-05 01:14:17.855431 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-03-05 01:14:17.855438 | orchestrator | Thursday 05 March 2026 01:12:04 +0000 (0:00:00.780) 0:00:02.506 ******** 2026-03-05 01:14:17.855445 | 
orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2026-03-05 01:14:17.855453 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2026-03-05 01:14:17.855460 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-05 01:14:17.855466 | orchestrator | 2026-03-05 01:14:17.855473 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-03-05 01:14:17.855480 | orchestrator | Thursday 05 March 2026 01:12:05 +0000 (0:00:01.119) 0:00:03.626 ******** 2026-03-05 01:14:17.855486 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 01:14:17.855493 | orchestrator | 2026-03-05 01:14:17.855509 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-03-05 01:14:17.855516 | orchestrator | Thursday 05 March 2026 01:12:06 +0000 (0:00:00.802) 0:00:04.429 ******** 2026-03-05 01:14:17.855535 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-05 01:14:17.855552 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-05 01:14:17.855560 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-05 01:14:17.855567 | orchestrator | 2026-03-05 01:14:17.855574 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-03-05 01:14:17.855581 | orchestrator | Thursday 05 March 2026 01:12:07 +0000 (0:00:01.559) 0:00:05.988 ******** 2026-03-05 01:14:17.855590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-05 01:14:17.855598 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:14:17.855606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-05 01:14:17.855614 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:14:17.855625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-05 01:14:17.855632 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:14:17.855639 | orchestrator | 2026-03-05 01:14:17.855645 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-03-05 01:14:17.855658 | orchestrator | Thursday 05 March 2026 01:12:08 +0000 (0:00:00.448) 0:00:06.437 
******** 2026-03-05 01:14:17.855665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-05 01:14:17.855673 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:14:17.855680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-05 01:14:17.855688 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:14:17.855695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-05 01:14:17.855702 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:14:17.855710 | orchestrator | 2026-03-05 01:14:17.855717 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-03-05 01:14:17.855725 | orchestrator | Thursday 05 March 2026 01:12:09 +0000 (0:00:01.010) 0:00:07.448 ******** 2026-03-05 01:14:17.855733 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-05 01:14:17.855821 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '3000', 'listen_port': '3000'}}}}) 2026-03-05 01:14:17.855999 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-05 01:14:17.856008 | orchestrator | 2026-03-05 01:14:17.856016 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-03-05 01:14:17.856045 | orchestrator | Thursday 05 March 2026 01:12:10 +0000 (0:00:01.250) 0:00:08.699 ******** 2026-03-05 01:14:17.856052 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-05 01:14:17.856060 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-05 01:14:17.856067 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-05 01:14:17.856075 | orchestrator | 2026-03-05 01:14:17.856083 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2026-03-05 01:14:17.856089 | orchestrator | Thursday 05 March 2026 01:12:12 +0000 (0:00:01.491) 0:00:10.191 ******** 2026-03-05 01:14:17.856096 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:14:17.856104 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:14:17.856110 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:14:17.856117 | orchestrator | 2026-03-05 01:14:17.856124 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2026-03-05 01:14:17.856131 | orchestrator | Thursday 05 March 2026 01:12:12 +0000 (0:00:00.576) 0:00:10.767 ******** 2026-03-05 01:14:17.856138 | orchestrator | 
changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-03-05 01:14:17.856148 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-03-05 01:14:17.856156 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-03-05 01:14:17.856170 | orchestrator | 2026-03-05 01:14:17.856178 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2026-03-05 01:14:17.856187 | orchestrator | Thursday 05 March 2026 01:12:13 +0000 (0:00:01.221) 0:00:11.989 ******** 2026-03-05 01:14:17.856197 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-03-05 01:14:17.856206 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-03-05 01:14:17.856215 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-03-05 01:14:17.856225 | orchestrator | 2026-03-05 01:14:17.856234 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2026-03-05 01:14:17.856243 | orchestrator | Thursday 05 March 2026 01:12:15 +0000 (0:00:01.273) 0:00:13.263 ******** 2026-03-05 01:14:17.856258 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-05 01:14:17.856267 | orchestrator | 2026-03-05 01:14:17.856274 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2026-03-05 01:14:17.856281 | orchestrator | Thursday 05 March 2026 01:12:16 +0000 (0:00:00.916) 0:00:14.179 ******** 2026-03-05 01:14:17.856290 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2026-03-05 01:14:17.856298 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2026-03-05 
01:14:17.856306 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:14:17.856313 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:14:17.856320 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:14:17.856328 | orchestrator | 2026-03-05 01:14:17.856335 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2026-03-05 01:14:17.856342 | orchestrator | Thursday 05 March 2026 01:12:16 +0000 (0:00:00.771) 0:00:14.950 ******** 2026-03-05 01:14:17.856350 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:14:17.856357 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:14:17.856365 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:14:17.856372 | orchestrator | 2026-03-05 01:14:17.856379 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2026-03-05 01:14:17.856387 | orchestrator | Thursday 05 March 2026 01:12:17 +0000 (0:00:00.649) 0:00:15.600 ******** 2026-03-05 01:14:17.856396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 121701, 'inode': 1096895, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9010618, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:14:17.856406 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 121701, 'inode': 1096895, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9010618, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:14:17.856414 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 121701, 'inode': 1096895, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9010618, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:14:17.856429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfsdashboard.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfsdashboard.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 143913, 'inode': 1096911, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.912062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:14:17.856446 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfsdashboard.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfsdashboard.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 143913, 'inode': 1096911, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.912062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:14:17.856455 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfsdashboard.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfsdashboard.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 143913, 'inode': 1096911, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.912062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:14:17.856464 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26019, 'inode': 1096936, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9254758, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:14:17.856471 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26019, 'inode': 1096936, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9254758, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:14:17.856476 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26019, 'inode': 1096936, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9254758, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:14:17.856485 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1096906, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.909062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:14:17.856489 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1096906, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.909062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:14:17.856499 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1096906, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.909062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:14:17.856504 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 170293, 'inode': 1096937, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9270623, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:14:17.856508 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 170293, 'inode': 1096937, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9270623, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:14:17.856513 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 170293, 'inode': 1096937, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9270623, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:14:17.856522 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-nvmeof-performance.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof-performance.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 33297, 'inode': 1096902, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.906062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:14:17.856526 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-nvmeof-performance.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/ceph-nvmeof-performance.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 33297, 'inode': 1096902, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.906062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:14:17.856536 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-nvmeof-performance.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof-performance.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 33297, 'inode': 1096902, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.906062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:14:17.856540 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26346, 'inode': 1096922, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.915062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:14:17.856545 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26346, 'inode': 1096922, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.915062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:14:17.856550 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26346, 'inode': 1096922, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.915062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:14:17.856558 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 46110, 'inode': 1096929, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9210622, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:14:17.856562 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 46110, 'inode': 1096929, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9210622, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:14:17.856933 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 46110, 'inode': 1096929, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9210622, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:14:17.856950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1096893, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.8999677, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 
01:14:17.856957 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1096893, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.8999677, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:14:17.856961 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1096893, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.8999677, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:14:17.856980 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1096897, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9040618, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:14:17.856985 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1096897, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9040618, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:14:17.856989 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1096897, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9040618, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:14:17.857010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1096909, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9108086, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 
01:14:17.857019 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1096909, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9108086, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:14:17.857025 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1096909, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9108086, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:14:17.857038 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19231, 'inode': 1096924, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.917062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False}})
2026-03-05 01:14:17.857046-857553 | orchestrator | changed: [testbed-node-0], [testbed-node-1], [testbed-node-2] => (loop over Grafana dashboard files; identical stat result reported on each node: uid/gid 0 (root:root), mode '0644', isreg True, nlink 1, dev 78, atime/mtime 1772668938.0; per-file fields condensed below)

item key                                     | path                                                              | size   | inode   | ctime
ceph/pool-detail.json                        | /operations/grafana/dashboards/ceph/pool-detail.json              | 19231  | 1096924 | 1772669792.917062
ceph/rbd-details.json                        | /operations/grafana/dashboards/ceph/rbd-details.json              | 13320  | 1096935 | 1772669792.9240623
ceph/ceph_overview.json                      | /operations/grafana/dashboards/ceph/ceph_overview.json            | 80386  | 1096905 | 1772669792.908062
ceph/radosgw-detail.json                     | /operations/grafana/dashboards/ceph/radosgw-detail.json           | 20042  | 1096927 | 1772669792.9190621
ceph/smb-overview.json                       | /operations/grafana/dashboards/ceph/smb-overview.json             | 29877  | 1096942 | 1772669792.9280622
ceph/osds-overview.json                      | /operations/grafana/dashboards/ceph/osds-overview.json            | 38375  | 1096923 | 1772669792.916062
ceph/multi-cluster-overview.json             | /operations/grafana/dashboards/ceph/multi-cluster-overview.json   | 63043  | 1096921 | 1772669792.9145575
ceph/hosts-overview.json                     | /operations/grafana/dashboards/ceph/hosts-overview.json           | 27387  | 1096919 | 1772669792.9145575
ceph/pool-overview.json                      | /operations/grafana/dashboards/ceph/pool-overview.json            | 49016  | 1096925 | 1772669792.9190621
ceph/host-details.json                       | /operations/grafana/dashboards/ceph/host-details.json             | 43303  | 1096915 | 1772669792.913062
ceph/radosgw-sync-overview.json              | /operations/grafana/dashboards/ceph/radosgw-sync-overview.json    | 16614  | 1096932 | 1772669792.9240623
ceph/ceph-nvmeof.json                        | /operations/grafana/dashboards/ceph/ceph-nvmeof.json              | 52667  | 1096904 | 1772669792.9070618
openstack/openstack.json                     | /operations/grafana/dashboards/openstack/openstack.json           | 57270  | 1097012 | 1772669792.9647245
infrastructure/haproxy.json                  | /operations/grafana/dashboards/infrastructure/haproxy.json        | 410814 | 1096961 | 1772669792.9400625
infrastructure/database.json                 | /operations/grafana/dashboards/infrastructure/database.json       | 30898  | 1096953 | 1772669792.9320009
infrastructure/node-rsrc-use.json            | /operations/grafana/dashboards/infrastructure/node-rsrc-use.json  | 15767  | 1096971 | 1772669792.944699
infrastructure/alertmanager-overview.json    | /operations/grafana/dashboards/infrastructure/alertmanager-overview.json | 9645   | 1096944 | 1772669792.9298897
infrastructure/opensearch.json               | /operations/grafana/dashboards/infrastructure/opensearch.json     | 65458  | 1096992 | 1772669792.9558926
infrastructure/node_exporter_full.json       | /operations/grafana/dashboards/infrastructure/node_exporter_full.json | 682774 | 1096974 | 1772669792.9523578
infrastructure/prometheus-remote-write.json  | /operations/grafana/dashboards/infrastructure/prometheus-remote-write.json | 22303  | 1096996 | 1772669792.9562855

2026-03-05 01:14:17.857561 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0,
'gid': 0, 'size': 38087, 'inode': 1097008, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9643316, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:14:17.857578 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1097008, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9643316, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:14:17.857594 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22303, 'inode': 1096996, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9562855, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:14:17.857602 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21194, 'inode': 1096990, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.954507, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:14:17.857608 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21194, 'inode': 1096990, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.954507, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:14:17.857616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1097008, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9643316, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:14:17.857622 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1096965, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9420626, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:14:17.857634 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1096965, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9420626, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:14:17.857646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21194, 'inode': 1096990, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.954507, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:14:17.857654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1096957, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9377277, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:14:17.857661 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1096957, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9377277, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:14:17.857668 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1096965, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9420626, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:14:17.857676 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1096963, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9410625, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:14:17.857684 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1096963, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9410625, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:14:17.857701 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1096957, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9377277, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:14:17.857709 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1096954, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9350624, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:14:17.857718 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1096954, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9350624, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:14:17.857723 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1096963, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9410625, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False}}) 2026-03-05 01:14:17.857728 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15957, 'inode': 1096968, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9438317, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:14:17.857733 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15957, 'inode': 1096968, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9438317, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:14:17.857767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1096954, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9350624, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:14:17.857772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1097003, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9620628, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:14:17.857777 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1097003, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9620628, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:14:17.857781 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15957, 'inode': 1096968, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 
1772669792.9438317, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:14:17.857786 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1097000, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9588923, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:14:17.857791 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1097000, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9588923, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:14:17.857804 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 31128, 'inode': 1096946, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9302669, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:14:17.857809 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1097003, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9620628, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:14:17.857813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1096946, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9302669, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:14:17.857818 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1096948, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9318666, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:14:17.857823 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1097000, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9588923, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:14:17.857827 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1096948, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9318666, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:14:17.857838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1096985, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.953847, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:14:17.857843 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1096946, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9302669, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:14:17.857847 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1096985, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.953847, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:14:17.857852 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 1096998, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9573858, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:14:17.857856 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 1096998, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9573858, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:14:17.857861 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1096948, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9318666, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False}}) 2026-03-05 01:14:17.857871 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1096985, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.953847, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:14:17.857876 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 1096998, 'dev': 78, 'nlink': 1, 'atime': 1772668938.0, 'mtime': 1772668938.0, 'ctime': 1772669792.9573858, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:14:17.857881 | orchestrator | 2026-03-05 01:14:17.857886 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2026-03-05 01:14:17.857891 | orchestrator | Thursday 05 March 2026 01:13:01 +0000 (0:00:44.297) 0:00:59.898 ******** 2026-03-05 01:14:17.857895 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-05 01:14:17.857900 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-05 01:14:17.857905 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-05 01:14:17.857913 | orchestrator | 2026-03-05 01:14:17.857917 | orchestrator | TASK [grafana : Creating grafana database] 
************************************* 2026-03-05 01:14:17.857922 | orchestrator | Thursday 05 March 2026 01:13:02 +0000 (0:00:01.091) 0:01:00.989 ******** 2026-03-05 01:14:17.857926 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:14:17.857931 | orchestrator | 2026-03-05 01:14:17.857935 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2026-03-05 01:14:17.857940 | orchestrator | Thursday 05 March 2026 01:13:05 +0000 (0:00:02.470) 0:01:03.459 ******** 2026-03-05 01:14:17.857944 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:14:17.857949 | orchestrator | 2026-03-05 01:14:17.857953 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-03-05 01:14:17.857958 | orchestrator | Thursday 05 March 2026 01:13:07 +0000 (0:00:02.444) 0:01:05.904 ******** 2026-03-05 01:14:17.857962 | orchestrator | 2026-03-05 01:14:17.857966 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-03-05 01:14:17.857971 | orchestrator | Thursday 05 March 2026 01:13:07 +0000 (0:00:00.083) 0:01:05.987 ******** 2026-03-05 01:14:17.857975 | orchestrator | 2026-03-05 01:14:17.857980 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-03-05 01:14:17.857984 | orchestrator | Thursday 05 March 2026 01:13:08 +0000 (0:00:00.314) 0:01:06.301 ******** 2026-03-05 01:14:17.857988 | orchestrator | 2026-03-05 01:14:17.857993 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2026-03-05 01:14:17.857997 | orchestrator | Thursday 05 March 2026 01:13:08 +0000 (0:00:00.090) 0:01:06.392 ******** 2026-03-05 01:14:17.858002 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:14:17.858006 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:14:17.858011 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:14:17.858053 | orchestrator | 2026-03-05 
01:14:17.858057 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2026-03-05 01:14:17.858065 | orchestrator | Thursday 05 March 2026 01:13:10 +0000 (0:00:01.819) 0:01:08.212 ******** 2026-03-05 01:14:17.858073 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:14:17.858080 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:14:17.858087 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2026-03-05 01:14:17.858097 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2026-03-05 01:14:17.858107 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:14:17.858117 | orchestrator | 2026-03-05 01:14:17.858124 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2026-03-05 01:14:17.858131 | orchestrator | Thursday 05 March 2026 01:13:37 +0000 (0:00:27.423) 0:01:35.635 ******** 2026-03-05 01:14:17.858139 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:14:17.858146 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:14:17.858153 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:14:17.858161 | orchestrator | 2026-03-05 01:14:17.858168 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2026-03-05 01:14:17.858175 | orchestrator | Thursday 05 March 2026 01:14:09 +0000 (0:00:31.921) 0:02:07.557 ******** 2026-03-05 01:14:17.858181 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:14:17.858188 | orchestrator | 2026-03-05 01:14:17.858195 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2026-03-05 01:14:17.858202 | orchestrator | Thursday 05 March 2026 01:14:12 +0000 (0:00:02.697) 0:02:10.255 ******** 2026-03-05 01:14:17.858209 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:14:17.858217 | orchestrator | skipping: [testbed-node-1] 
2026-03-05 01:14:17.858224 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:14:17.858232 | orchestrator | 2026-03-05 01:14:17.858240 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2026-03-05 01:14:17.858255 | orchestrator | Thursday 05 March 2026 01:14:13 +0000 (0:00:01.162) 0:02:11.417 ******** 2026-03-05 01:14:17.858265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})  2026-03-05 01:14:17.858275 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2026-03-05 01:14:17.858284 | orchestrator | 2026-03-05 01:14:17.858289 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2026-03-05 01:14:17.858293 | orchestrator | Thursday 05 March 2026 01:14:16 +0000 (0:00:02.917) 0:02:14.334 ******** 2026-03-05 01:14:17.858298 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:14:17.858302 | orchestrator | 2026-03-05 01:14:17.858306 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-05 01:14:17.858311 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-05 01:14:17.858317 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-05 01:14:17.858322 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 
ignored=0 2026-03-05 01:14:17.858326 | orchestrator | 2026-03-05 01:14:17.858330 | orchestrator | 2026-03-05 01:14:17.858335 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-05 01:14:17.858339 | orchestrator | Thursday 05 March 2026 01:14:16 +0000 (0:00:00.547) 0:02:14.881 ******** 2026-03-05 01:14:17.858344 | orchestrator | =============================================================================== 2026-03-05 01:14:17.858348 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 44.30s 2026-03-05 01:14:17.858353 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 31.92s 2026-03-05 01:14:17.858357 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 27.42s 2026-03-05 01:14:17.858361 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.92s 2026-03-05 01:14:17.858365 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.70s 2026-03-05 01:14:17.858370 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.47s 2026-03-05 01:14:17.858374 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.44s 2026-03-05 01:14:17.858378 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.82s 2026-03-05 01:14:17.858382 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.56s 2026-03-05 01:14:17.858387 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.49s 2026-03-05 01:14:17.858391 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.27s 2026-03-05 01:14:17.858395 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.25s 2026-03-05 01:14:17.858400 | orchestrator | 
grafana : Configuring Prometheus as data source for Grafana ------------- 1.22s 2026-03-05 01:14:17.858408 | orchestrator | grafana : Remove old grafana docker volume ------------------------------ 1.16s 2026-03-05 01:14:17.858412 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 1.12s 2026-03-05 01:14:17.858416 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.09s 2026-03-05 01:14:17.858421 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 1.01s 2026-03-05 01:14:17.858429 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.92s 2026-03-05 01:14:17.858434 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.80s 2026-03-05 01:14:17.858438 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.78s 2026-03-05 01:14:17.858442 | orchestrator | 2026-03-05 01:14:17 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED 2026-03-05 01:14:17.858447 | orchestrator | 2026-03-05 01:14:17 | INFO  | Task 0021797a-9fe1-46b8-bd3f-84c82d3a40b5 is in state STARTED 2026-03-05 01:14:17.858452 | orchestrator | 2026-03-05 01:14:17 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:14:20.900496 | orchestrator | 2026-03-05 01:14:20 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED 2026-03-05 01:14:20.900574 | orchestrator | 2026-03-05 01:14:20 | INFO  | Task 0021797a-9fe1-46b8-bd3f-84c82d3a40b5 is in state STARTED 2026-03-05 01:14:20.900581 | orchestrator | 2026-03-05 01:14:20 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:14:23.949866 | orchestrator | 2026-03-05 01:14:23 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED 2026-03-05 01:14:23.950721 | orchestrator | 2026-03-05 01:14:23 | INFO  | Task 0021797a-9fe1-46b8-bd3f-84c82d3a40b5 is in state STARTED 2026-03-05 
01:14:23.950782 | orchestrator | 2026-03-05 01:14:23 | INFO  | Wait 1 second(s) until the next check 2026-03-05
01:14:57 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED 2026-03-05 01:14:57.424085 | orchestrator | 2026-03-05 01:14:57 | INFO  | Task 0021797a-9fe1-46b8-bd3f-84c82d3a40b5 is in state STARTED 2026-03-05 01:14:57.424139 | orchestrator | 2026-03-05 01:14:57 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:15:00.469292 | orchestrator | 2026-03-05 01:15:00 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state STARTED 2026-03-05 01:15:00.470679 | orchestrator | 2026-03-05 01:15:00 | INFO  | Task 0021797a-9fe1-46b8-bd3f-84c82d3a40b5 is in state STARTED 2026-03-05 01:15:00.470907 | orchestrator | 2026-03-05 01:15:00 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:15:03.514894 | orchestrator | 2026-03-05 01:15:03 | INFO  | Task 4e20f752-f1f4-4314-9624-dbf852d17094 is in state SUCCESS 2026-03-05 01:15:03.517181 | orchestrator | 2026-03-05 01:15:03.517295 | orchestrator | 2026-03-05 01:15:03.517310 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-05 01:15:03.517329 | orchestrator | 2026-03-05 01:15:03.517345 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2026-03-05 01:15:03.517362 | orchestrator | Thursday 05 March 2026 01:03:21 +0000 (0:00:00.339) 0:00:00.339 ******** 2026-03-05 01:15:03.517378 | orchestrator | changed: [testbed-manager] 2026-03-05 01:15:03.517396 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:15:03.517413 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:15:03.517428 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:15:03.517439 | orchestrator | changed: [testbed-node-3] 2026-03-05 01:15:03.517448 | orchestrator | changed: [testbed-node-4] 2026-03-05 01:15:03.517458 | orchestrator | changed: [testbed-node-5] 2026-03-05 01:15:03.517468 | orchestrator | 2026-03-05 01:15:03.517484 | orchestrator | TASK [Group hosts based on Kolla action] 
*************************************** 2026-03-05 01:15:03.517499 | orchestrator | Thursday 05 March 2026 01:03:22 +0000 (0:00:01.479) 0:00:01.818 ******** 2026-03-05 01:15:03.517517 | orchestrator | changed: [testbed-manager] 2026-03-05 01:15:03.517534 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:15:03.517550 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:15:03.517567 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:15:03.517618 | orchestrator | changed: [testbed-node-3] 2026-03-05 01:15:03.517637 | orchestrator | changed: [testbed-node-4] 2026-03-05 01:15:03.517647 | orchestrator | changed: [testbed-node-5] 2026-03-05 01:15:03.517656 | orchestrator | 2026-03-05 01:15:03.517667 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-05 01:15:03.517676 | orchestrator | Thursday 05 March 2026 01:03:24 +0000 (0:00:01.684) 0:00:03.503 ******** 2026-03-05 01:15:03.517689 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2026-03-05 01:15:03.517731 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2026-03-05 01:15:03.518271 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2026-03-05 01:15:03.518296 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2026-03-05 01:15:03.518306 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2026-03-05 01:15:03.518315 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2026-03-05 01:15:03.518325 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2026-03-05 01:15:03.518334 | orchestrator | 2026-03-05 01:15:03.518345 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2026-03-05 01:15:03.518364 | orchestrator | 2026-03-05 01:15:03.518469 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-03-05 01:15:03.518485 | orchestrator | 
Thursday 05 March 2026 01:03:26 +0000 (0:00:02.296) 0:00:05.799 ******** 2026-03-05 01:15:03.518520 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 01:15:03.518530 | orchestrator | 2026-03-05 01:15:03.518541 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2026-03-05 01:15:03.518551 | orchestrator | Thursday 05 March 2026 01:03:28 +0000 (0:00:01.663) 0:00:07.463 ******** 2026-03-05 01:15:03.518562 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2026-03-05 01:15:03.518573 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2026-03-05 01:15:03.518583 | orchestrator | 2026-03-05 01:15:03.518594 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2026-03-05 01:15:03.518610 | orchestrator | Thursday 05 March 2026 01:03:33 +0000 (0:00:04.714) 0:00:12.177 ******** 2026-03-05 01:15:03.518631 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-05 01:15:03.518738 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-05 01:15:03.518755 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:15:03.518773 | orchestrator | 2026-03-05 01:15:03.518822 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-03-05 01:15:03.518839 | orchestrator | Thursday 05 March 2026 01:03:38 +0000 (0:00:05.150) 0:00:17.328 ******** 2026-03-05 01:15:03.518855 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:15:03.518884 | orchestrator | 2026-03-05 01:15:03.519088 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2026-03-05 01:15:03.519111 | orchestrator | Thursday 05 March 2026 01:03:39 +0000 (0:00:01.321) 0:00:18.650 ******** 2026-03-05 01:15:03.519131 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:15:03.519150 | orchestrator | 2026-03-05 01:15:03.519170 | orchestrator | TASK [nova : Copying 
over nova.conf for nova-api-bootstrap] ******************** 2026-03-05 01:15:03.519189 | orchestrator | Thursday 05 March 2026 01:03:42 +0000 (0:00:02.600) 0:00:21.250 ******** 2026-03-05 01:15:03.519207 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:15:03.519226 | orchestrator | 2026-03-05 01:15:03.519244 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-03-05 01:15:03.519262 | orchestrator | Thursday 05 March 2026 01:03:46 +0000 (0:00:03.960) 0:00:25.210 ******** 2026-03-05 01:15:03.519280 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:15:03.519300 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:15:03.519317 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:15:03.519357 | orchestrator | 2026-03-05 01:15:03.519375 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2026-03-05 01:15:03.519391 | orchestrator | Thursday 05 March 2026 01:03:46 +0000 (0:00:00.735) 0:00:25.946 ******** 2026-03-05 01:15:03.519443 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:15:03.519456 | orchestrator | 2026-03-05 01:15:03.519466 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2026-03-05 01:15:03.519476 | orchestrator | Thursday 05 March 2026 01:04:19 +0000 (0:00:33.056) 0:00:59.003 ******** 2026-03-05 01:15:03.519485 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:15:03.519551 | orchestrator | 2026-03-05 01:15:03.519570 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-03-05 01:15:03.519583 | orchestrator | Thursday 05 March 2026 01:04:38 +0000 (0:00:18.106) 0:01:17.109 ******** 2026-03-05 01:15:03.519593 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:15:03.519602 | orchestrator | 2026-03-05 01:15:03.519612 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-03-05 01:15:03.519623 
| orchestrator | Thursday 05 March 2026 01:04:52 +0000 (0:00:14.085) 0:01:31.195 ******** 2026-03-05 01:15:03.519670 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:15:03.519682 | orchestrator | 2026-03-05 01:15:03.519693 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2026-03-05 01:15:03.519820 | orchestrator | Thursday 05 March 2026 01:04:53 +0000 (0:00:01.424) 0:01:32.619 ******** 2026-03-05 01:15:03.519836 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:15:03.519852 | orchestrator | 2026-03-05 01:15:03.519883 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-03-05 01:15:03.519892 | orchestrator | Thursday 05 March 2026 01:04:54 +0000 (0:00:00.581) 0:01:33.201 ******** 2026-03-05 01:15:03.519901 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 01:15:03.519909 | orchestrator | 2026-03-05 01:15:03.519919 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2026-03-05 01:15:03.519933 | orchestrator | Thursday 05 March 2026 01:04:54 +0000 (0:00:00.851) 0:01:34.052 ******** 2026-03-05 01:15:03.519946 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:15:03.519959 | orchestrator | 2026-03-05 01:15:03.519973 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-03-05 01:15:03.519987 | orchestrator | Thursday 05 March 2026 01:05:17 +0000 (0:00:22.507) 0:01:56.559 ******** 2026-03-05 01:15:03.519996 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:15:03.520004 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:15:03.520012 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:15:03.520020 | orchestrator | 2026-03-05 01:15:03.520028 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2026-03-05 01:15:03.520036 | 
orchestrator | 2026-03-05 01:15:03.520045 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-03-05 01:15:03.520052 | orchestrator | Thursday 05 March 2026 01:05:18 +0000 (0:00:00.544) 0:01:57.103 ******** 2026-03-05 01:15:03.520061 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 01:15:03.520068 | orchestrator | 2026-03-05 01:15:03.520077 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2026-03-05 01:15:03.520084 | orchestrator | Thursday 05 March 2026 01:05:19 +0000 (0:00:01.384) 0:01:58.488 ******** 2026-03-05 01:15:03.520092 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:15:03.520105 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:15:03.520117 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:15:03.520130 | orchestrator | 2026-03-05 01:15:03.520143 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2026-03-05 01:15:03.520158 | orchestrator | Thursday 05 March 2026 01:05:21 +0000 (0:00:02.477) 0:02:00.965 ******** 2026-03-05 01:15:03.520172 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:15:03.520186 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:15:03.520199 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:15:03.520211 | orchestrator | 2026-03-05 01:15:03.520225 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2026-03-05 01:15:03.520253 | orchestrator | Thursday 05 March 2026 01:05:24 +0000 (0:00:02.541) 0:02:03.507 ******** 2026-03-05 01:15:03.520267 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:15:03.520282 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:15:03.520290 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:15:03.520298 | orchestrator | 2026-03-05 01:15:03.520310 | orchestrator | TASK [service-rabbitmq : nova | Ensure 
RabbitMQ users exist] ******************* 2026-03-05 01:15:03.520323 | orchestrator | Thursday 05 March 2026 01:05:24 +0000 (0:00:00.362) 0:02:03.870 ******** 2026-03-05 01:15:03.520337 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-05 01:15:03.520351 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:15:03.520361 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-03-05 01:15:03.520374 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:15:03.520387 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-03-05 01:15:03.520402 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2026-03-05 01:15:03.520416 | orchestrator | 2026-03-05 01:15:03.520429 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2026-03-05 01:15:03.520443 | orchestrator | Thursday 05 March 2026 01:05:35 +0000 (0:00:10.852) 0:02:14.723 ******** 2026-03-05 01:15:03.520457 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:15:03.520472 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:15:03.520485 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:15:03.520499 | orchestrator | 2026-03-05 01:15:03.520512 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2026-03-05 01:15:03.520525 | orchestrator | Thursday 05 March 2026 01:05:37 +0000 (0:00:01.835) 0:02:16.559 ******** 2026-03-05 01:15:03.520534 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-03-05 01:15:03.520541 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:15:03.520549 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-05 01:15:03.520557 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:15:03.520565 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-03-05 01:15:03.520573 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:15:03.520581 | orchestrator | 2026-03-05 01:15:03.520589 | 
orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-03-05 01:15:03.520597 | orchestrator | Thursday 05 March 2026 01:05:39 +0000 (0:00:01.805) 0:02:18.364 ******** 2026-03-05 01:15:03.520605 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:15:03.520613 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:15:03.520621 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:15:03.520629 | orchestrator | 2026-03-05 01:15:03.520637 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2026-03-05 01:15:03.520645 | orchestrator | Thursday 05 March 2026 01:05:40 +0000 (0:00:00.825) 0:02:19.190 ******** 2026-03-05 01:15:03.520653 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:15:03.520661 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:15:03.520669 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:15:03.520677 | orchestrator | 2026-03-05 01:15:03.520685 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2026-03-05 01:15:03.520717 | orchestrator | Thursday 05 March 2026 01:05:41 +0000 (0:00:01.319) 0:02:20.510 ******** 2026-03-05 01:15:03.520732 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:15:03.520747 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:15:03.520772 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:15:03.520780 | orchestrator | 2026-03-05 01:15:03.520788 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2026-03-05 01:15:03.520796 | orchestrator | Thursday 05 March 2026 01:05:44 +0000 (0:00:03.128) 0:02:23.638 ******** 2026-03-05 01:15:03.520804 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:15:03.520811 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:15:03.520819 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:15:03.520827 | orchestrator | 2026-03-05 01:15:03.520835 | orchestrator 
| TASK [nova-cell : Get a list of existing cells] ******************************** 2026-03-05 01:15:03.520852 | orchestrator | Thursday 05 March 2026 01:06:08 +0000 (0:00:24.418) 0:02:48.057 ******** 2026-03-05 01:15:03.520860 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:15:03.520868 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:15:03.520876 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:15:03.520884 | orchestrator | 2026-03-05 01:15:03.520891 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-03-05 01:15:03.520899 | orchestrator | Thursday 05 March 2026 01:06:23 +0000 (0:00:14.228) 0:03:02.285 ******** 2026-03-05 01:15:03.520907 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:15:03.520915 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:15:03.520923 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:15:03.520931 | orchestrator | 2026-03-05 01:15:03.520939 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2026-03-05 01:15:03.520946 | orchestrator | Thursday 05 March 2026 01:06:24 +0000 (0:00:01.223) 0:03:03.508 ******** 2026-03-05 01:15:03.520954 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:15:03.520962 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:15:03.520970 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:15:03.520978 | orchestrator | 2026-03-05 01:15:03.520986 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2026-03-05 01:15:03.520994 | orchestrator | Thursday 05 March 2026 01:06:38 +0000 (0:00:14.371) 0:03:17.880 ******** 2026-03-05 01:15:03.521002 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:15:03.521010 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:15:03.521017 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:15:03.521025 | orchestrator | 2026-03-05 01:15:03.521033 | orchestrator | TASK 
[Bootstrap upgrade] ******************************************************* 2026-03-05 01:15:03.521041 | orchestrator | Thursday 05 March 2026 01:06:40 +0000 (0:00:01.325) 0:03:19.205 ******** 2026-03-05 01:15:03.521049 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:15:03.521057 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:15:03.521065 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:15:03.521073 | orchestrator | 2026-03-05 01:15:03.521081 | orchestrator | PLAY [Apply role nova] ********************************************************* 2026-03-05 01:15:03.521089 | orchestrator | 2026-03-05 01:15:03.521097 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-03-05 01:15:03.521105 | orchestrator | Thursday 05 March 2026 01:06:40 +0000 (0:00:00.643) 0:03:19.848 ******** 2026-03-05 01:15:03.521113 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 01:15:03.521125 | orchestrator | 2026-03-05 01:15:03.521138 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2026-03-05 01:15:03.521149 | orchestrator | Thursday 05 March 2026 01:06:41 +0000 (0:00:00.625) 0:03:20.474 ******** 2026-03-05 01:15:03.521160 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2026-03-05 01:15:03.521176 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2026-03-05 01:15:03.521196 | orchestrator | 2026-03-05 01:15:03.521208 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2026-03-05 01:15:03.521222 | orchestrator | Thursday 05 March 2026 01:06:45 +0000 (0:00:03.856) 0:03:24.330 ******** 2026-03-05 01:15:03.521235 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2026-03-05 01:15:03.521251 | orchestrator | skipping: 
[testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2026-03-05 01:15:03.521263 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2026-03-05 01:15:03.521275 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2026-03-05 01:15:03.521287 | orchestrator | 2026-03-05 01:15:03.521299 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2026-03-05 01:15:03.521322 | orchestrator | Thursday 05 March 2026 01:06:52 +0000 (0:00:07.516) 0:03:31.847 ******** 2026-03-05 01:15:03.521336 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-05 01:15:03.521348 | orchestrator | 2026-03-05 01:15:03.521360 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2026-03-05 01:15:03.521371 | orchestrator | Thursday 05 March 2026 01:06:56 +0000 (0:00:03.699) 0:03:35.547 ******** 2026-03-05 01:15:03.521385 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2026-03-05 01:15:03.521399 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-05 01:15:03.521412 | orchestrator | 2026-03-05 01:15:03.521426 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2026-03-05 01:15:03.521439 | orchestrator | Thursday 05 March 2026 01:07:00 +0000 (0:00:04.408) 0:03:39.955 ******** 2026-03-05 01:15:03.521452 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-05 01:15:03.521466 | orchestrator | 2026-03-05 01:15:03.521479 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2026-03-05 01:15:03.521492 | orchestrator | Thursday 05 March 2026 01:07:04 +0000 (0:00:03.363) 0:03:43.319 ******** 2026-03-05 01:15:03.521505 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 
2026-03-05 01:15:03.521519 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2026-03-05 01:15:03.521532 | orchestrator | 2026-03-05 01:15:03.521544 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-03-05 01:15:03.521569 | orchestrator | Thursday 05 March 2026 01:07:11 +0000 (0:00:07.382) 0:03:50.702 ******** 2026-03-05 01:15:03.521590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-05 01:15:03.521610 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-05 01:15:03.521639 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': 
'8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-05 01:15:03.521668 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-05 01:15:03.521685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-05 01:15:03.521727 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-05 01:15:03.521741 | orchestrator | 2026-03-05 01:15:03.521753 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2026-03-05 01:15:03.521766 | orchestrator | Thursday 05 March 2026 01:07:13 +0000 (0:00:01.773) 0:03:52.475 ******** 2026-03-05 01:15:03.521779 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:15:03.521793 | orchestrator | 2026-03-05 01:15:03.521806 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2026-03-05 01:15:03.521820 | orchestrator | Thursday 05 March 2026 01:07:13 +0000 (0:00:00.362) 0:03:52.838 ******** 2026-03-05 01:15:03.521833 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:15:03.521847 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:15:03.521861 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:15:03.521873 | orchestrator | 2026-03-05 01:15:03.521886 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2026-03-05 01:15:03.521898 | orchestrator | Thursday 05 March 2026 01:07:16 +0000 (0:00:02.544) 0:03:55.382 ******** 2026-03-05 01:15:03.521922 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-05 01:15:03.521936 | orchestrator | 2026-03-05 01:15:03.521948 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2026-03-05 01:15:03.521962 | orchestrator | Thursday 05 March 2026 01:07:17 +0000 (0:00:01.513) 0:03:56.895 ******** 2026-03-05 01:15:03.521975 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:15:03.521988 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:15:03.522002 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:15:03.522071 | orchestrator | 2026-03-05 01:15:03.522090 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-03-05 01:15:03.522104 | orchestrator | 
Thursday 05 March 2026 01:07:18 +0000 (0:00:00.798) 0:03:57.694 ******** 2026-03-05 01:15:03.522117 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 01:15:03.522131 | orchestrator | 2026-03-05 01:15:03.522143 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-03-05 01:15:03.522157 | orchestrator | Thursday 05 March 2026 01:07:20 +0000 (0:00:01.984) 0:03:59.679 ******** 2026-03-05 01:15:03.522170 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-05 01:15:03.522200 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-05 01:15:03.522217 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-05 01:15:03.522243 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-05 01:15:03.522260 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-05 01:15:03.522282 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-05 01:15:03.522296 | orchestrator | 2026-03-05 01:15:03.522310 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-03-05 01:15:03.522319 | orchestrator | Thursday 05 March 2026 01:07:25 +0000 (0:00:05.165) 0:04:04.844 ******** 2026-03-05 01:15:03.522328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-05 01:15:03.522343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-05 01:15:03.522352 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:15:03.522361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-05 01:15:03.522370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-05 01:15:03.522378 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:15:03.522394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-05 01:15:03.522404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-05 01:15:03.522417 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:15:03.522425 | orchestrator | 2026-03-05 01:15:03.522433 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-03-05 01:15:03.522441 | orchestrator | Thursday 05 March 2026 01:07:27 +0000 (0:00:01.650) 0:04:06.495 ******** 2026-03-05 01:15:03.522450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-05 01:15:03.522459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-05 01:15:03.522467 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:15:03.522482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  
2026-03-05 01:15:03.522497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-05 01:15:03.522510 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:15:03.522524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-05 01:15:03.522537 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-05 01:15:03.522550 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:15:03.522563 | orchestrator | 2026-03-05 01:15:03.522576 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-03-05 01:15:03.522591 | orchestrator | Thursday 05 March 2026 01:07:28 +0000 (0:00:01.079) 0:04:07.574 ******** 2026-03-05 01:15:03.522615 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 
'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-05 01:15:03.522632 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-05 01:15:03.522641 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-05 01:15:03.522650 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-05 01:15:03.522665 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-05 01:15:03.522680 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-05 01:15:03.522737 | orchestrator | 2026-03-05 01:15:03.522751 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-03-05 01:15:03.522764 | orchestrator | Thursday 05 March 2026 01:07:33 +0000 (0:00:04.916) 0:04:12.491 ******** 2026-03-05 01:15:03.522800 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-05 01:15:03.522816 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-05 01:15:03.522859 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-05 01:15:03.522890 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-05 01:15:03.522906 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-05 01:15:03.522921 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-05 01:15:03.522936 | orchestrator | 2026-03-05 01:15:03.522949 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-03-05 01:15:03.522963 | orchestrator | Thursday 05 March 2026 01:07:46 +0000 (0:00:13.460) 0:04:25.951 ******** 2026-03-05 01:15:03.522978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-05 01:15:03.523002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-05 01:15:03.523018 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:15:03.523032 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-05 01:15:03.523041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-05 01:15:03.523050 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:15:03.523058 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  
2026-03-05 01:15:03.523141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-05 01:15:03.523150 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:15:03.523158 | orchestrator | 2026-03-05 01:15:03.523166 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-03-05 01:15:03.523181 | orchestrator | Thursday 05 March 2026 01:07:49 +0000 (0:00:02.284) 0:04:28.236 ******** 2026-03-05 01:15:03.523189 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:15:03.523198 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:15:03.523206 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:15:03.523214 | orchestrator | 2026-03-05 01:15:03.523227 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2026-03-05 01:15:03.523236 | orchestrator | Thursday 05 March 2026 01:07:53 +0000 (0:00:04.332) 0:04:32.568 ******** 2026-03-05 01:15:03.523243 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:15:03.523251 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:15:03.523259 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:15:03.523267 | orchestrator | 2026-03-05 01:15:03.523275 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2026-03-05 01:15:03.523283 | orchestrator | Thursday 05 March 2026 01:07:54 +0000 (0:00:01.234) 0:04:33.803 ******** 2026-03-05 01:15:03.523297 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-05 01:15:03.523306 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-05 01:15:03.523316 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-05 01:15:03.523336 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-05 01:15:03.523350 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-05 01:15:03.523373 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-05 01:15:03.523382 | orchestrator | 2026-03-05 01:15:03.523391 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-05 01:15:03.523399 | orchestrator | Thursday 05 March 2026 01:07:59 +0000 (0:00:04.572) 0:04:38.375 ******** 2026-03-05 01:15:03.523407 | orchestrator | 2026-03-05 01:15:03.523415 | orchestrator | TASK 
[nova : Flush handlers] *************************************************** 2026-03-05 01:15:03.523423 | orchestrator | Thursday 05 March 2026 01:07:59 +0000 (0:00:00.309) 0:04:38.685 ******** 2026-03-05 01:15:03.523431 | orchestrator | 2026-03-05 01:15:03.523439 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-05 01:15:03.523447 | orchestrator | Thursday 05 March 2026 01:07:59 +0000 (0:00:00.366) 0:04:39.051 ******** 2026-03-05 01:15:03.523455 | orchestrator | 2026-03-05 01:15:03.523463 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2026-03-05 01:15:03.523471 | orchestrator | Thursday 05 March 2026 01:08:00 +0000 (0:00:00.329) 0:04:39.380 ******** 2026-03-05 01:15:03.523479 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:15:03.523487 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:15:03.523496 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:15:03.523504 | orchestrator | 2026-03-05 01:15:03.523512 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2026-03-05 01:15:03.523520 | orchestrator | Thursday 05 March 2026 01:08:22 +0000 (0:00:22.506) 0:05:01.887 ******** 2026-03-05 01:15:03.523528 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:15:03.523536 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:15:03.523549 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:15:03.523557 | orchestrator | 2026-03-05 01:15:03.523565 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-03-05 01:15:03.523573 | orchestrator | 2026-03-05 01:15:03.523582 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-05 01:15:03.523590 | orchestrator | Thursday 05 March 2026 01:08:37 +0000 (0:00:14.357) 0:05:16.245 ******** 2026-03-05 01:15:03.523599 | orchestrator | included: 
/ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 01:15:03.523607 | orchestrator | 2026-03-05 01:15:03.523615 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-05 01:15:03.523623 | orchestrator | Thursday 05 March 2026 01:08:39 +0000 (0:00:02.452) 0:05:18.697 ******** 2026-03-05 01:15:03.523631 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:15:03.523639 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:15:03.523647 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:15:03.523655 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:15:03.523663 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:15:03.523671 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:15:03.523679 | orchestrator | 2026-03-05 01:15:03.523687 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2026-03-05 01:15:03.523866 | orchestrator | Thursday 05 March 2026 01:08:40 +0000 (0:00:00.651) 0:05:19.349 ******** 2026-03-05 01:15:03.523901 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:15:03.523910 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:15:03.523917 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:15:03.523925 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-05 01:15:03.523934 | orchestrator | 2026-03-05 01:15:03.523942 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-05 01:15:03.523960 | orchestrator | Thursday 05 March 2026 01:08:42 +0000 (0:00:01.759) 0:05:21.109 ******** 2026-03-05 01:15:03.523969 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-03-05 01:15:03.523977 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-03-05 01:15:03.523985 | orchestrator | ok: [testbed-node-5] => 
(item=br_netfilter) 2026-03-05 01:15:03.524006 | orchestrator | 2026-03-05 01:15:03.524014 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-05 01:15:03.524032 | orchestrator | Thursday 05 March 2026 01:08:43 +0000 (0:00:01.536) 0:05:22.646 ******** 2026-03-05 01:15:03.524040 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2026-03-05 01:15:03.524048 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2026-03-05 01:15:03.524056 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2026-03-05 01:15:03.524064 | orchestrator | 2026-03-05 01:15:03.524072 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-05 01:15:03.524080 | orchestrator | Thursday 05 March 2026 01:08:46 +0000 (0:00:02.457) 0:05:25.103 ******** 2026-03-05 01:15:03.524088 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2026-03-05 01:15:03.524096 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:15:03.524104 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2026-03-05 01:15:03.524112 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:15:03.524120 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2026-03-05 01:15:03.524136 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:15:03.524144 | orchestrator | 2026-03-05 01:15:03.524153 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2026-03-05 01:15:03.524161 | orchestrator | Thursday 05 March 2026 01:08:47 +0000 (0:00:01.063) 0:05:26.167 ******** 2026-03-05 01:15:03.524168 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-05 01:15:03.524176 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-05 01:15:03.524196 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:15:03.524204 | 
orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-05 01:15:03.524212 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-05 01:15:03.524220 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-05 01:15:03.524228 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-05 01:15:03.524236 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-05 01:15:03.524244 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:15:03.524252 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-05 01:15:03.524260 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-05 01:15:03.524268 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:15:03.524276 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-05 01:15:03.524284 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-05 01:15:03.524291 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-05 01:15:03.524299 | orchestrator | 2026-03-05 01:15:03.524307 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2026-03-05 01:15:03.524315 | orchestrator | Thursday 05 March 2026 01:08:49 +0000 (0:00:02.596) 0:05:28.763 ******** 2026-03-05 01:15:03.524321 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:15:03.524328 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:15:03.524335 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:15:03.524342 | orchestrator | changed: [testbed-node-3] 2026-03-05 01:15:03.524349 | orchestrator | changed: [testbed-node-4] 2026-03-05 01:15:03.524355 | orchestrator | changed: [testbed-node-5] 2026-03-05 
01:15:03.524362 | orchestrator | 2026-03-05 01:15:03.524369 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2026-03-05 01:15:03.524376 | orchestrator | Thursday 05 March 2026 01:08:50 +0000 (0:00:01.244) 0:05:30.008 ******** 2026-03-05 01:15:03.524383 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:15:03.524389 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:15:03.524396 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:15:03.524403 | orchestrator | changed: [testbed-node-4] 2026-03-05 01:15:03.524410 | orchestrator | changed: [testbed-node-5] 2026-03-05 01:15:03.524416 | orchestrator | changed: [testbed-node-3] 2026-03-05 01:15:03.524423 | orchestrator | 2026-03-05 01:15:03.524430 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-03-05 01:15:03.524437 | orchestrator | Thursday 05 March 2026 01:08:53 +0000 (0:00:02.693) 0:05:32.701 ******** 2026-03-05 01:15:03.524445 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-05 01:15:03.524461 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 
'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-05 01:15:03.524478 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-05 01:15:03.524485 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-05 01:15:03.524493 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-05 01:15:03.524500 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-05 01:15:03.524507 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 
'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-05 01:15:03.524518 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-05 01:15:03.524536 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-05 01:15:03.524544 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-05 01:15:03.524552 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-05 01:15:03.524560 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-05 01:15:03.524567 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-05 01:15:03.524580 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-05 01:15:03.524596 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-05 01:15:03.524604 | orchestrator | 2026-03-05 01:15:03.524611 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-05 01:15:03.524618 | orchestrator | Thursday 05 March 2026 01:08:56 +0000 (0:00:03.186) 
0:05:35.888 ******** 2026-03-05 01:15:03.524625 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 01:15:03.524634 | orchestrator | 2026-03-05 01:15:03.524641 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-03-05 01:15:03.524647 | orchestrator | Thursday 05 March 2026 01:08:59 +0000 (0:00:02.278) 0:05:38.167 ******** 2026-03-05 01:15:03.524655 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-05 01:15:03.524662 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-05 01:15:03.524675 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-05 01:15:03.524688 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-05 01:15:03.524722 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 
'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-05 01:15:03.524735 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-05 01:15:03.524747 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-05 01:15:03.524755 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-05 01:15:03.524762 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-05 01:15:03.524781 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-05 01:15:03.524792 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-05 01:15:03.524800 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-05 01:15:03.524807 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-05 01:15:03.524814 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 
'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-05 01:15:03.524821 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-05 01:15:03.524833 | orchestrator | 2026-03-05 01:15:03.524839 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-03-05 01:15:03.524846 | orchestrator | Thursday 05 March 2026 01:09:04 +0000 (0:00:05.491) 0:05:43.659 ******** 2026-03-05 01:15:03.524858 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-05 01:15:03.524869 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-05 01:15:03.524876 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-05 01:15:03.524883 | orchestrator | skipping: [testbed-node-3] 2026-03-05 
01:15:03.524890 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-05 01:15:03.524897 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-05 01:15:03.524914 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-05 01:15:03.524921 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:15:03.524928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-05 01:15:03.524939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-05 01:15:03.524946 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:15:03.524953 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 
'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-05 01:15:03.524961 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-05 01:15:03.524968 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-05 01:15:03.524983 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:15:03.524996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-05 01:15:03.525003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-05 01:15:03.525010 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:15:03.525017 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-05 01:15:03.525024 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-05 01:15:03.525031 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:15:03.525038 | orchestrator | 2026-03-05 01:15:03.525045 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-03-05 01:15:03.525052 | orchestrator | Thursday 05 March 2026 01:09:10 +0000 (0:00:05.904) 0:05:49.563 ******** 2026-03-05 01:15:03.525081 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-05 
01:15:03.525094 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-05 01:15:03.525107 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-05 01:15:03.525114 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:15:03.525125 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', 
'/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-05 01:15:03.525133 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-05 01:15:03.525140 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-05 01:15:03.525154 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:15:03.525161 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': 
{'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-05 01:15:03.525169 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-05 01:15:03.525181 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-05 01:15:03.525188 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:15:03.525199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-05 01:15:03.525206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-05 01:15:03.525213 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:15:03.525220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-05 01:15:03.525232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-05 01:15:03.525239 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:15:03.525246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-05 01:15:03.525257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-05 01:15:03.525264 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:15:03.525271 | orchestrator | 2026-03-05 01:15:03.525278 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-05 01:15:03.525285 | orchestrator | Thursday 05 March 2026 01:09:14 +0000 (0:00:03.917) 0:05:53.480 ******** 2026-03-05 01:15:03.525292 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:15:03.525298 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:15:03.525305 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:15:03.525312 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-05 01:15:03.525319 | orchestrator | 2026-03-05 01:15:03.525325 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2026-03-05 01:15:03.525332 | orchestrator | Thursday 05 March 2026 01:09:15 +0000 (0:00:01.573) 0:05:55.054 ******** 2026-03-05 01:15:03.525362 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-05 01:15:03.525369 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-05 01:15:03.525375 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-05 01:15:03.525382 | orchestrator | 2026-03-05 01:15:03.525394 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2026-03-05 01:15:03.525401 | orchestrator | Thursday 05 March 2026 01:09:18 +0000 (0:00:02.212) 0:05:57.267 ******** 2026-03-05 01:15:03.525408 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-05 01:15:03.525414 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-05 01:15:03.525421 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-05 
01:15:03.525427 | orchestrator | 2026-03-05 01:15:03.525434 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2026-03-05 01:15:03.525446 | orchestrator | Thursday 05 March 2026 01:09:19 +0000 (0:00:01.718) 0:05:58.986 ******** 2026-03-05 01:15:03.525452 | orchestrator | ok: [testbed-node-3] 2026-03-05 01:15:03.525459 | orchestrator | ok: [testbed-node-5] 2026-03-05 01:15:03.525466 | orchestrator | ok: [testbed-node-4] 2026-03-05 01:15:03.525472 | orchestrator | 2026-03-05 01:15:03.525479 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2026-03-05 01:15:03.525486 | orchestrator | Thursday 05 March 2026 01:09:21 +0000 (0:00:01.271) 0:06:00.257 ******** 2026-03-05 01:15:03.525492 | orchestrator | ok: [testbed-node-3] 2026-03-05 01:15:03.525499 | orchestrator | ok: [testbed-node-4] 2026-03-05 01:15:03.525505 | orchestrator | ok: [testbed-node-5] 2026-03-05 01:15:03.525512 | orchestrator | 2026-03-05 01:15:03.525519 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2026-03-05 01:15:03.525525 | orchestrator | Thursday 05 March 2026 01:09:23 +0000 (0:00:01.832) 0:06:02.090 ******** 2026-03-05 01:15:03.525532 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-05 01:15:03.525539 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-03-05 01:15:03.525545 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-03-05 01:15:03.525552 | orchestrator | 2026-03-05 01:15:03.525558 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2026-03-05 01:15:03.525565 | orchestrator | Thursday 05 March 2026 01:09:25 +0000 (0:00:02.394) 0:06:04.485 ******** 2026-03-05 01:15:03.525572 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-05 01:15:03.525578 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-03-05 
01:15:03.525585 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-03-05 01:15:03.525591 | orchestrator | 2026-03-05 01:15:03.525598 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2026-03-05 01:15:03.525604 | orchestrator | Thursday 05 March 2026 01:09:27 +0000 (0:00:02.228) 0:06:06.714 ******** 2026-03-05 01:15:03.525612 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-03-05 01:15:03.525618 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-03-05 01:15:03.525625 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-05 01:15:03.525631 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2026-03-05 01:15:03.525638 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2026-03-05 01:15:03.525645 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2026-03-05 01:15:03.525651 | orchestrator | 2026-03-05 01:15:03.525658 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2026-03-05 01:15:03.525665 | orchestrator | Thursday 05 March 2026 01:09:34 +0000 (0:00:07.306) 0:06:14.020 ******** 2026-03-05 01:15:03.525672 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:15:03.525678 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:15:03.525685 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:15:03.525691 | orchestrator | 2026-03-05 01:15:03.525755 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2026-03-05 01:15:03.525762 | orchestrator | Thursday 05 March 2026 01:09:35 +0000 (0:00:00.553) 0:06:14.573 ******** 2026-03-05 01:15:03.525769 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:15:03.525776 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:15:03.525782 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:15:03.525789 | orchestrator | 2026-03-05 01:15:03.525796 | 
orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2026-03-05 01:15:03.525803 | orchestrator | Thursday 05 March 2026 01:09:35 +0000 (0:00:00.422) 0:06:14.996 ******** 2026-03-05 01:15:03.525809 | orchestrator | changed: [testbed-node-3] 2026-03-05 01:15:03.525816 | orchestrator | changed: [testbed-node-5] 2026-03-05 01:15:03.525822 | orchestrator | changed: [testbed-node-4] 2026-03-05 01:15:03.525829 | orchestrator | 2026-03-05 01:15:03.525835 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2026-03-05 01:15:03.525842 | orchestrator | Thursday 05 March 2026 01:09:37 +0000 (0:00:01.991) 0:06:16.988 ******** 2026-03-05 01:15:03.525979 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-03-05 01:15:03.525990 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-03-05 01:15:03.525997 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-03-05 01:15:03.526004 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-03-05 01:15:03.526011 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-03-05 01:15:03.526049 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-03-05 01:15:03.526056 | orchestrator | 2026-03-05 01:15:03.526063 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2026-03-05 01:15:03.526070 | orchestrator | Thursday 05 
March 2026 01:09:43 +0000 (0:00:05.201) 0:06:22.190 ******** 2026-03-05 01:15:03.526076 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-05 01:15:03.526088 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-05 01:15:03.526095 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-05 01:15:03.526101 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-05 01:15:03.526108 | orchestrator | changed: [testbed-node-3] 2026-03-05 01:15:03.526115 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-05 01:15:03.526121 | orchestrator | changed: [testbed-node-4] 2026-03-05 01:15:03.526127 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-05 01:15:03.526134 | orchestrator | changed: [testbed-node-5] 2026-03-05 01:15:03.526140 | orchestrator | 2026-03-05 01:15:03.526147 | orchestrator | TASK [nova-cell : Include tasks from qemu_wrapper.yml] ************************* 2026-03-05 01:15:03.526154 | orchestrator | Thursday 05 March 2026 01:09:47 +0000 (0:00:04.123) 0:06:26.313 ******** 2026-03-05 01:15:03.526160 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:15:03.526167 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:15:03.526173 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:15:03.526180 | orchestrator | included: /ansible/roles/nova-cell/tasks/qemu_wrapper.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-05 01:15:03.526187 | orchestrator | 2026-03-05 01:15:03.526193 | orchestrator | TASK [nova-cell : Check qemu wrapper file] ************************************* 2026-03-05 01:15:03.526200 | orchestrator | Thursday 05 March 2026 01:09:49 +0000 (0:00:02.617) 0:06:28.931 ******** 2026-03-05 01:15:03.526207 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-05 01:15:03.526213 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-05 01:15:03.526220 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-05 01:15:03.526226 | orchestrator | 
2026-03-05 01:15:03.526233 | orchestrator | TASK [nova-cell : Copy qemu wrapper] ******************************************* 2026-03-05 01:15:03.526239 | orchestrator | Thursday 05 March 2026 01:09:51 +0000 (0:00:02.042) 0:06:30.974 ******** 2026-03-05 01:15:03.526246 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:15:03.526253 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:15:03.526259 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:15:03.526266 | orchestrator | 2026-03-05 01:15:03.526272 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2026-03-05 01:15:03.526279 | orchestrator | Thursday 05 March 2026 01:09:52 +0000 (0:00:00.373) 0:06:31.347 ******** 2026-03-05 01:15:03.526325 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:15:03.526333 | orchestrator | 2026-03-05 01:15:03.526340 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2026-03-05 01:15:03.526346 | orchestrator | Thursday 05 March 2026 01:09:52 +0000 (0:00:00.205) 0:06:31.552 ******** 2026-03-05 01:15:03.526363 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:15:03.526370 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:15:03.526377 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:15:03.526383 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:15:03.526390 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:15:03.526396 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:15:03.526403 | orchestrator | 2026-03-05 01:15:03.526409 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2026-03-05 01:15:03.526416 | orchestrator | Thursday 05 March 2026 01:09:53 +0000 (0:00:00.568) 0:06:32.121 ******** 2026-03-05 01:15:03.526423 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-05 01:15:03.526429 | orchestrator | 2026-03-05 01:15:03.526436 | orchestrator | TASK [nova-cell : Set 
vendordata file path] ************************************ 2026-03-05 01:15:03.526442 | orchestrator | Thursday 05 March 2026 01:09:53 +0000 (0:00:00.846) 0:06:32.968 ******** 2026-03-05 01:15:03.526449 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:15:03.526455 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:15:03.526462 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:15:03.526469 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:15:03.526475 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:15:03.526482 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:15:03.526489 | orchestrator | 2026-03-05 01:15:03.526495 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2026-03-05 01:15:03.526502 | orchestrator | Thursday 05 March 2026 01:09:54 +0000 (0:00:00.547) 0:06:33.515 ******** 2026-03-05 01:15:03.526516 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-05 01:15:03.526529 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-05 01:15:03.526536 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-05 01:15:03.526548 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-05 01:15:03.526556 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-05 01:15:03.526564 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-05 01:15:03.526576 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-05 01:15:03.526587 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-05 01:15:03.526595 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-05 01:15:03.526602 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-05 01:15:03.526614 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-05 01:15:03.526621 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-05 01:15:03.526632 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-05 
01:15:03.526640 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-05 01:15:03.526651 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-05 01:15:03.526664 | orchestrator | 2026-03-05 01:15:03.526670 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2026-03-05 01:15:03.526677 | orchestrator | Thursday 05 March 2026 01:09:58 +0000 (0:00:04.047) 0:06:37.563 ******** 2026-03-05 01:15:03.526684 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-05 01:15:03.526692 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-05 01:15:03.526721 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 
'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-05 01:15:03.526732 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-05 01:15:03.526744 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-05 01:15:03.526759 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 
'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-05 01:15:03.526767 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-05 01:15:03.526774 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-05 01:15:03.526785 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-05 01:15:03.526792 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-05 01:15:03.526803 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-05 01:15:03.526815 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-05 01:15:03.526823 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-05 01:15:03.526829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-05 01:15:03.526836 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-05 01:15:03.526843 | orchestrator |
2026-03-05 01:15:03.526850 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] *******************
2026-03-05 01:15:03.526857 | orchestrator | Thursday 05 March 2026 01:10:07 +0000 (0:00:08.774) 0:06:46.337 ********
2026-03-05 01:15:03.526863 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:15:03.526870 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:15:03.526877 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:15:03.526883 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:15:03.526893 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:15:03.526900 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:15:03.526906 | orchestrator |
2026-03-05 01:15:03.526913 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] **************************
2026-03-05 01:15:03.526920 | orchestrator | Thursday 05 March 2026 01:10:09 +0000 (0:00:02.479) 0:06:48.817 ********
2026-03-05 01:15:03.526927 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-03-05 01:15:03.526933 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-03-05 01:15:03.526940 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-03-05 01:15:03.526947 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-03-05 01:15:03.526958 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-03-05 01:15:03.526964 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-03-05 01:15:03.526971 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:15:03.526978 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:15:03.526984 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-03-05 01:15:03.526994 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:15:03.527001 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-03-05 01:15:03.527008 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-03-05 01:15:03.527014 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-03-05 01:15:03.527020 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-03-05 01:15:03.527027 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-03-05 01:15:03.527034 | orchestrator |
2026-03-05 01:15:03.527040 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] *******************************
2026-03-05 01:15:03.527047 | orchestrator | Thursday 05 March 2026 01:10:13 +0000 (0:00:04.047) 0:06:52.864 ********
2026-03-05 01:15:03.527054 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:15:03.527060 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:15:03.527067 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:15:03.527073 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:15:03.527080 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:15:03.527087 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:15:03.527093 | orchestrator |
2026-03-05 01:15:03.527100 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] *********************
2026-03-05 01:15:03.527107 | orchestrator | Thursday 05 March 2026 01:10:14 +0000 (0:00:00.680) 0:06:53.545 ********
2026-03-05 01:15:03.527113 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-03-05 01:15:03.527120 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-03-05 01:15:03.527126 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-03-05 01:15:03.527133 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-03-05 01:15:03.527140 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-03-05 01:15:03.527147 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-03-05 01:15:03.527153 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-03-05 01:15:03.527160 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-03-05 01:15:03.527167 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-03-05 01:15:03.527173 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-03-05 01:15:03.527179 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-03-05 01:15:03.527186 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:15:03.527193 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-03-05 01:15:03.527200 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-03-05 01:15:03.527211 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-03-05 01:15:03.527218 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:15:03.527225 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-03-05 01:15:03.527232 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:15:03.527238 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-03-05 01:15:03.527248 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-03-05 01:15:03.527255 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-03-05 01:15:03.527262 | orchestrator |
2026-03-05 01:15:03.527269 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] **********************************
2026-03-05 01:15:03.527276 | orchestrator | Thursday 05 March 2026 01:10:22 +0000 (0:00:08.501) 0:07:02.047 ********
2026-03-05 01:15:03.527282 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-05 01:15:03.527289 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-05 01:15:03.527295 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-05 01:15:03.527302 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-05 01:15:03.527308 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-05 01:15:03.527315 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-05 01:15:03.527326 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-05 01:15:03.527332 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-05 01:15:03.527339 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-05 01:15:03.527345 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-05 01:15:03.527352 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-05 01:15:03.527359 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-05 01:15:03.527365 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-05 01:15:03.527396 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:15:03.527405 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-05 01:15:03.527412 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:15:03.527418 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-05 01:15:03.527425 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-05 01:15:03.527432 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-05 01:15:03.527439 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-05 01:15:03.527445 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:15:03.527452 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-05 01:15:03.527459 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-05 01:15:03.527466 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-05 01:15:03.527472 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-05 01:15:03.527479 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-05 01:15:03.527491 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-05 01:15:03.527498 | orchestrator |
2026-03-05 01:15:03.527505 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ******************************
2026-03-05 01:15:03.527512 | orchestrator | Thursday 05 March 2026 01:10:30 +0000 (0:00:07.698) 0:07:09.746 ********
2026-03-05 01:15:03.527518 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:15:03.527525 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:15:03.527531 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:15:03.527538 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:15:03.527545 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:15:03.527551 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:15:03.527558 | orchestrator |
2026-03-05 01:15:03.527564 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] *********************
2026-03-05 01:15:03.527571 | orchestrator | Thursday 05 March 2026 01:10:31 +0000 (0:00:00.969) 0:07:10.716 ********
2026-03-05 01:15:03.527578 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:15:03.527584 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:15:03.527591 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:15:03.527597 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:15:03.527604 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:15:03.527610 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:15:03.527617 | orchestrator |
2026-03-05 01:15:03.527623 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ******************
2026-03-05 01:15:03.527630 | orchestrator | Thursday 05 March 2026 01:10:32 +0000 (0:00:00.675) 0:07:11.391 ********
2026-03-05 01:15:03.527637 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:15:03.527643 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:15:03.527650 | orchestrator | changed: [testbed-node-4]
2026-03-05 01:15:03.527657 | orchestrator | changed: [testbed-node-3]
2026-03-05 01:15:03.527663 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:15:03.527670 | orchestrator | changed: [testbed-node-5]
2026-03-05 01:15:03.527677 | orchestrator |
2026-03-05 01:15:03.527683 | orchestrator | TASK [nova-cell : Copying over existing policy file] ***************************
2026-03-05 01:15:03.527690 | orchestrator | Thursday 05 March 2026 01:10:35 +0000 (0:00:02.830) 0:07:14.221 ********
2026-03-05 01:15:03.527731 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '',
'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-05 01:15:03.527744 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-05 01:15:03.527751 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-05 01:15:03.527763 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:15:03.527770 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 
'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-05 01:15:03.527777 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-05 01:15:03.527789 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-05 01:15:03.527796 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:15:03.527803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-05 01:15:03.527814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-05 01:15:03.527826 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:15:03.527833 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', 
'/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-05 01:15:03.527840 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-05 01:15:03.527847 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-05 01:15:03.527854 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:15:03.527868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': 
{'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-05 01:15:03.527875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-05 01:15:03.527882 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:15:03.527893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-05 01:15:03.527905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 
'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-05 01:15:03.527912 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:15:03.527919 | orchestrator |
2026-03-05 01:15:03.527926 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ******************
2026-03-05 01:15:03.527933 | orchestrator | Thursday 05 March 2026 01:10:37 +0000 (0:00:02.777) 0:07:16.999 ********
2026-03-05 01:15:03.527940 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2026-03-05 01:15:03.527946 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-03-05 01:15:03.527953 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:15:03.527960 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-03-05 01:15:03.527966 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-03-05 01:15:03.527973 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:15:03.527980 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-03-05 01:15:03.527986 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-03-05 01:15:03.527993 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:15:03.528000 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-03-05 01:15:03.528006 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-03-05 01:15:03.528013 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:15:03.528019 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-03-05 01:15:03.528026 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-03-05 01:15:03.528033 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:15:03.528039 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-03-05 01:15:03.528046 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-03-05 01:15:03.528053 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:15:03.528059 | orchestrator |
2026-03-05 01:15:03.528066 | orchestrator | TASK [nova-cell : Check nova-cell containers] **********************************
2026-03-05 01:15:03.528073 | orchestrator | Thursday 05 March 2026 01:10:38 +0000 (0:00:01.031) 0:07:18.030 ********
2026-03-05 01:15:03.528084 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-05 01:15:03.528092 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-05 01:15:03.528112 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-05 01:15:03.528119 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-05 01:15:03.528126 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-05 01:15:03.528133 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-05 01:15:03.528145 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-05 01:15:03.528157 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-05 01:15:03.528167 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-05 01:15:03.528174 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-05 01:15:03.528181 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-05 01:15:03.528188 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-05 01:15:03.528195 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-05 01:15:03.528206 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 
'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-05 01:15:03.528221 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-05 01:15:03.528229 | orchestrator | 2026-03-05 01:15:03.528236 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-05 01:15:03.528242 | orchestrator | Thursday 05 March 2026 01:10:42 +0000 (0:00:03.299) 0:07:21.330 ******** 2026-03-05 01:15:03.528249 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:15:03.528256 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:15:03.528263 | orchestrator | skipping: [testbed-node-5] 
2026-03-05 01:15:03.528269 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:15:03.528276 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:15:03.528282 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:15:03.528289 | orchestrator | 2026-03-05 01:15:03.528295 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-05 01:15:03.528302 | orchestrator | Thursday 05 March 2026 01:10:43 +0000 (0:00:00.771) 0:07:22.101 ******** 2026-03-05 01:15:03.528309 | orchestrator | 2026-03-05 01:15:03.528315 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-05 01:15:03.528322 | orchestrator | Thursday 05 March 2026 01:10:43 +0000 (0:00:00.144) 0:07:22.246 ******** 2026-03-05 01:15:03.528329 | orchestrator | 2026-03-05 01:15:03.528335 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-05 01:15:03.528342 | orchestrator | Thursday 05 March 2026 01:10:43 +0000 (0:00:00.131) 0:07:22.378 ******** 2026-03-05 01:15:03.528349 | orchestrator | 2026-03-05 01:15:03.528356 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-05 01:15:03.528363 | orchestrator | Thursday 05 March 2026 01:10:43 +0000 (0:00:00.204) 0:07:22.583 ******** 2026-03-05 01:15:03.528369 | orchestrator | 2026-03-05 01:15:03.528376 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-05 01:15:03.528383 | orchestrator | Thursday 05 March 2026 01:10:43 +0000 (0:00:00.375) 0:07:22.958 ******** 2026-03-05 01:15:03.528389 | orchestrator | 2026-03-05 01:15:03.528396 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-05 01:15:03.528403 | orchestrator | Thursday 05 March 2026 01:10:44 +0000 (0:00:00.219) 0:07:23.177 ******** 2026-03-05 01:15:03.528409 | orchestrator | 2026-03-05 01:15:03.528416 
| orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2026-03-05 01:15:03.528422 | orchestrator | Thursday 05 March 2026 01:10:44 +0000 (0:00:00.192) 0:07:23.370 ******** 2026-03-05 01:15:03.528429 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:15:03.528436 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:15:03.528442 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:15:03.528454 | orchestrator | 2026-03-05 01:15:03.528460 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2026-03-05 01:15:03.528467 | orchestrator | Thursday 05 March 2026 01:10:57 +0000 (0:00:13.310) 0:07:36.681 ******** 2026-03-05 01:15:03.528473 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:15:03.528480 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:15:03.528487 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:15:03.528493 | orchestrator | 2026-03-05 01:15:03.528500 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2026-03-05 01:15:03.528507 | orchestrator | Thursday 05 March 2026 01:11:17 +0000 (0:00:19.651) 0:07:56.332 ******** 2026-03-05 01:15:03.528514 | orchestrator | changed: [testbed-node-4] 2026-03-05 01:15:03.528520 | orchestrator | changed: [testbed-node-5] 2026-03-05 01:15:03.528527 | orchestrator | changed: [testbed-node-3] 2026-03-05 01:15:03.528533 | orchestrator | 2026-03-05 01:15:03.528540 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2026-03-05 01:15:03.528547 | orchestrator | Thursday 05 March 2026 01:12:22 +0000 (0:01:05.185) 0:09:01.517 ******** 2026-03-05 01:15:03.528553 | orchestrator | changed: [testbed-node-4] 2026-03-05 01:15:03.528560 | orchestrator | changed: [testbed-node-3] 2026-03-05 01:15:03.528567 | orchestrator | changed: [testbed-node-5] 2026-03-05 01:15:03.528573 | orchestrator | 2026-03-05 01:15:03.528580 | orchestrator 
| RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2026-03-05 01:15:03.528586 | orchestrator | Thursday 05 March 2026 01:13:03 +0000 (0:00:40.943) 0:09:42.461 ******** 2026-03-05 01:15:03.528597 | orchestrator | FAILED - RETRYING: [testbed-node-3]: Checking libvirt container is ready (10 retries left). 2026-03-05 01:15:03.528604 | orchestrator | FAILED - RETRYING: [testbed-node-5]: Checking libvirt container is ready (10 retries left). 2026-03-05 01:15:03.528610 | orchestrator | FAILED - RETRYING: [testbed-node-4]: Checking libvirt container is ready (10 retries left). 2026-03-05 01:15:03.528617 | orchestrator | changed: [testbed-node-3] 2026-03-05 01:15:03.528624 | orchestrator | changed: [testbed-node-4] 2026-03-05 01:15:03.528631 | orchestrator | changed: [testbed-node-5] 2026-03-05 01:15:03.528637 | orchestrator | 2026-03-05 01:15:03.528644 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2026-03-05 01:15:03.528650 | orchestrator | Thursday 05 March 2026 01:13:09 +0000 (0:00:06.308) 0:09:48.770 ******** 2026-03-05 01:15:03.528657 | orchestrator | changed: [testbed-node-3] 2026-03-05 01:15:03.528663 | orchestrator | changed: [testbed-node-4] 2026-03-05 01:15:03.528670 | orchestrator | changed: [testbed-node-5] 2026-03-05 01:15:03.528676 | orchestrator | 2026-03-05 01:15:03.528683 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2026-03-05 01:15:03.528690 | orchestrator | Thursday 05 March 2026 01:13:10 +0000 (0:00:00.763) 0:09:49.534 ******** 2026-03-05 01:15:03.528718 | orchestrator | changed: [testbed-node-3] 2026-03-05 01:15:03.528726 | orchestrator | changed: [testbed-node-5] 2026-03-05 01:15:03.528732 | orchestrator | changed: [testbed-node-4] 2026-03-05 01:15:03.528739 | orchestrator | 2026-03-05 01:15:03.528749 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 
2026-03-05 01:15:03.528756 | orchestrator | Thursday 05 March 2026 01:13:40 +0000 (0:00:29.591) 0:10:19.125 ******** 2026-03-05 01:15:03.528763 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:15:03.528770 | orchestrator | 2026-03-05 01:15:03.528776 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2026-03-05 01:15:03.528783 | orchestrator | Thursday 05 March 2026 01:13:40 +0000 (0:00:00.138) 0:10:19.264 ******** 2026-03-05 01:15:03.528790 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:15:03.528796 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:15:03.528803 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:15:03.528809 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:15:03.528816 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:15:03.528822 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 
2026-03-05 01:15:03.528833 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-05 01:15:03.528840 | orchestrator | 2026-03-05 01:15:03.528847 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2026-03-05 01:15:03.528854 | orchestrator | Thursday 05 March 2026 01:14:06 +0000 (0:00:25.932) 0:10:45.196 ******** 2026-03-05 01:15:03.528860 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:15:03.528867 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:15:03.528873 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:15:03.528880 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:15:03.528886 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:15:03.528893 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:15:03.528899 | orchestrator | 2026-03-05 01:15:03.528906 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2026-03-05 01:15:03.528912 | orchestrator | Thursday 05 March 2026 01:14:18 +0000 (0:00:12.461) 0:10:57.658 ******** 2026-03-05 01:15:03.528919 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:15:03.528926 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:15:03.528932 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:15:03.528939 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:15:03.528945 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:15:03.528952 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-5 2026-03-05 01:15:03.528959 | orchestrator | 2026-03-05 01:15:03.528965 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-03-05 01:15:03.528972 | orchestrator | Thursday 05 March 2026 01:14:23 +0000 (0:00:04.601) 0:11:02.260 ******** 2026-03-05 01:15:03.528979 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-05 01:15:03.528985 | 
orchestrator | 2026-03-05 01:15:03.528992 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-03-05 01:15:03.528998 | orchestrator | Thursday 05 March 2026 01:14:37 +0000 (0:00:14.679) 0:11:16.939 ******** 2026-03-05 01:15:03.529005 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-05 01:15:03.529012 | orchestrator | 2026-03-05 01:15:03.529018 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2026-03-05 01:15:03.529025 | orchestrator | Thursday 05 March 2026 01:14:39 +0000 (0:00:01.682) 0:11:18.622 ******** 2026-03-05 01:15:03.529032 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:15:03.529038 | orchestrator | 2026-03-05 01:15:03.529045 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2026-03-05 01:15:03.529052 | orchestrator | Thursday 05 March 2026 01:14:41 +0000 (0:00:01.806) 0:11:20.429 ******** 2026-03-05 01:15:03.529058 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-05 01:15:03.529065 | orchestrator | 2026-03-05 01:15:03.529072 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2026-03-05 01:15:03.529078 | orchestrator | Thursday 05 March 2026 01:14:54 +0000 (0:00:13.046) 0:11:33.476 ******** 2026-03-05 01:15:03.529085 | orchestrator | ok: [testbed-node-3] 2026-03-05 01:15:03.529091 | orchestrator | ok: [testbed-node-4] 2026-03-05 01:15:03.529098 | orchestrator | ok: [testbed-node-5] 2026-03-05 01:15:03.529105 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:15:03.529111 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:15:03.529118 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:15:03.529125 | orchestrator | 2026-03-05 01:15:03.529131 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2026-03-05 01:15:03.529138 | orchestrator | 2026-03-05 
01:15:03.529145 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2026-03-05 01:15:03.529151 | orchestrator | Thursday 05 March 2026 01:14:56 +0000 (0:00:02.075) 0:11:35.551 ******** 2026-03-05 01:15:03.529158 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:15:03.529169 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:15:03.529176 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:15:03.529187 | orchestrator | 2026-03-05 01:15:03.529194 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2026-03-05 01:15:03.529200 | orchestrator | 2026-03-05 01:15:03.529207 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2026-03-05 01:15:03.529214 | orchestrator | Thursday 05 March 2026 01:14:57 +0000 (0:00:01.332) 0:11:36.884 ******** 2026-03-05 01:15:03.529220 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:15:03.529227 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:15:03.529234 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:15:03.529240 | orchestrator | 2026-03-05 01:15:03.529247 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2026-03-05 01:15:03.529253 | orchestrator | 2026-03-05 01:15:03.529260 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2026-03-05 01:15:03.529267 | orchestrator | Thursday 05 March 2026 01:14:58 +0000 (0:00:00.570) 0:11:37.454 ******** 2026-03-05 01:15:03.529273 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2026-03-05 01:15:03.529280 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-03-05 01:15:03.529287 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-03-05 01:15:03.529293 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2026-03-05 01:15:03.529303 | orchestrator | 
skipping: [testbed-node-3] => (item=nova-serialproxy)  2026-03-05 01:15:03.529310 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2026-03-05 01:15:03.529317 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:15:03.529323 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2026-03-05 01:15:03.529330 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-03-05 01:15:03.529337 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-03-05 01:15:03.529343 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2026-03-05 01:15:03.529350 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2026-03-05 01:15:03.529356 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2026-03-05 01:15:03.529363 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:15:03.529369 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2026-03-05 01:15:03.529376 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-03-05 01:15:03.529383 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-03-05 01:15:03.529389 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2026-03-05 01:15:03.529396 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2026-03-05 01:15:03.529402 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2026-03-05 01:15:03.529409 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:15:03.529415 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2026-03-05 01:15:03.529422 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-03-05 01:15:03.529429 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-03-05 01:15:03.529435 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2026-03-05 01:15:03.529442 | orchestrator | 
skipping: [testbed-node-0] => (item=nova-serialproxy)  2026-03-05 01:15:03.529448 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2026-03-05 01:15:03.529455 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:15:03.529461 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2026-03-05 01:15:03.529468 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-03-05 01:15:03.529474 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-03-05 01:15:03.529481 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2026-03-05 01:15:03.529488 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2026-03-05 01:15:03.529494 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2026-03-05 01:15:03.529505 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:15:03.529512 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2026-03-05 01:15:03.529519 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-03-05 01:15:03.529526 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-03-05 01:15:03.529532 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2026-03-05 01:15:03.529539 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2026-03-05 01:15:03.529545 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2026-03-05 01:15:03.529552 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:15:03.529558 | orchestrator | 2026-03-05 01:15:03.529565 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2026-03-05 01:15:03.529572 | orchestrator | 2026-03-05 01:15:03.529578 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2026-03-05 01:15:03.529585 | orchestrator | Thursday 05 March 2026 01:14:59 +0000 (0:00:01.543) 
0:11:38.998 ******** 2026-03-05 01:15:03.529592 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2026-03-05 01:15:03.529599 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2026-03-05 01:15:03.529605 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:15:03.529612 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2026-03-05 01:15:03.529618 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2026-03-05 01:15:03.529625 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:15:03.529632 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2026-03-05 01:15:03.529638 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2026-03-05 01:15:03.529645 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:15:03.529652 | orchestrator | 2026-03-05 01:15:03.529658 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2026-03-05 01:15:03.529665 | orchestrator | 2026-03-05 01:15:03.529675 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2026-03-05 01:15:03.529682 | orchestrator | Thursday 05 March 2026 01:15:00 +0000 (0:00:00.911) 0:11:39.909 ******** 2026-03-05 01:15:03.529689 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:15:03.529715 | orchestrator | 2026-03-05 01:15:03.529725 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2026-03-05 01:15:03.529735 | orchestrator | 2026-03-05 01:15:03.529745 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2026-03-05 01:15:03.529756 | orchestrator | Thursday 05 March 2026 01:15:01 +0000 (0:00:00.749) 0:11:40.658 ******** 2026-03-05 01:15:03.529768 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:15:03.529779 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:15:03.529791 | orchestrator | skipping: [testbed-node-2] 
2026-03-05 01:15:03.529802 | orchestrator | 2026-03-05 01:15:03.529813 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-05 01:15:03.529821 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 01:15:03.529829 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=45  rescued=0 ignored=0 2026-03-05 01:15:03.529841 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=52  rescued=0 ignored=0 2026-03-05 01:15:03.529848 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=52  rescued=0 ignored=0 2026-03-05 01:15:03.529855 | orchestrator | testbed-node-3 : ok=40  changed=27  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-03-05 01:15:03.529862 | orchestrator | testbed-node-4 : ok=39  changed=27  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-03-05 01:15:03.529874 | orchestrator | testbed-node-5 : ok=44  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-03-05 01:15:03.529881 | orchestrator | 2026-03-05 01:15:03.529887 | orchestrator | 2026-03-05 01:15:03.529894 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-05 01:15:03.529901 | orchestrator | Thursday 05 March 2026 01:15:02 +0000 (0:00:00.823) 0:11:41.482 ******** 2026-03-05 01:15:03.529908 | orchestrator | =============================================================================== 2026-03-05 01:15:03.529919 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 65.19s 2026-03-05 01:15:03.529926 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 40.94s 2026-03-05 01:15:03.529932 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 33.06s 2026-03-05 01:15:03.529939 | orchestrator | nova-cell : 
Restart nova-compute container ----------------------------- 29.59s 2026-03-05 01:15:03.529946 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 25.93s 2026-03-05 01:15:03.529952 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 24.42s 2026-03-05 01:15:03.529959 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 22.51s 2026-03-05 01:15:03.529966 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 22.51s 2026-03-05 01:15:03.529972 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 19.65s 2026-03-05 01:15:03.529979 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 18.11s 2026-03-05 01:15:03.529985 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.68s 2026-03-05 01:15:03.529992 | orchestrator | nova-cell : Create cell ------------------------------------------------ 14.37s 2026-03-05 01:15:03.529998 | orchestrator | nova : Restart nova-api container -------------------------------------- 14.36s 2026-03-05 01:15:03.530005 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.23s 2026-03-05 01:15:03.530012 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.09s 2026-03-05 01:15:03.530046 | orchestrator | nova : Copying over nova.conf ------------------------------------------ 13.46s 2026-03-05 01:15:03.530053 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 13.31s 2026-03-05 01:15:03.530060 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 13.05s 2026-03-05 01:15:03.530066 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------ 12.46s 2026-03-05 01:15:03.530073 | orchestrator | service-rabbitmq : nova | 
Ensure RabbitMQ users exist ------------------ 10.85s 2026-03-05 01:15:03.530079 | orchestrator | 2026-03-05 01:15:03 | INFO  | Task 0021797a-9fe1-46b8-bd3f-84c82d3a40b5 is in state STARTED 2026-03-05 01:15:03.530086 | orchestrator | 2026-03-05 01:15:03 | INFO  | Wait 1 second(s) until the next check
the next check 2026-03-05 01:16:16.450322 | orchestrator | 2026-03-05 01:16:16 | INFO  | Task 0021797a-9fe1-46b8-bd3f-84c82d3a40b5 is in state STARTED 2026-03-05 01:16:16.451196 | orchestrator | 2026-03-05 01:16:16 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:16:19.509347 | orchestrator | 2026-03-05 01:16:19 | INFO  | Task 0021797a-9fe1-46b8-bd3f-84c82d3a40b5 is in state STARTED 2026-03-05 01:16:19.509450 | orchestrator | 2026-03-05 01:16:19 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:16:22.550966 | orchestrator | 2026-03-05 01:16:22 | INFO  | Task 0021797a-9fe1-46b8-bd3f-84c82d3a40b5 is in state STARTED 2026-03-05 01:16:22.551055 | orchestrator | 2026-03-05 01:16:22 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:16:25.583383 | orchestrator | 2026-03-05 01:16:25 | INFO  | Task 0021797a-9fe1-46b8-bd3f-84c82d3a40b5 is in state STARTED 2026-03-05 01:16:25.583469 | orchestrator | 2026-03-05 01:16:25 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:16:28.618079 | orchestrator | 2026-03-05 01:16:28 | INFO  | Task 0021797a-9fe1-46b8-bd3f-84c82d3a40b5 is in state STARTED 2026-03-05 01:16:28.618154 | orchestrator | 2026-03-05 01:16:28 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:16:31.654972 | orchestrator | 2026-03-05 01:16:31 | INFO  | Task 0021797a-9fe1-46b8-bd3f-84c82d3a40b5 is in state STARTED 2026-03-05 01:16:31.655043 | orchestrator | 2026-03-05 01:16:31 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:16:34.689002 | orchestrator | 2026-03-05 01:16:34 | INFO  | Task 0021797a-9fe1-46b8-bd3f-84c82d3a40b5 is in state STARTED 2026-03-05 01:16:34.689107 | orchestrator | 2026-03-05 01:16:34 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:16:37.732585 | orchestrator | 2026-03-05 01:16:37 | INFO  | Task 0021797a-9fe1-46b8-bd3f-84c82d3a40b5 is in state STARTED 2026-03-05 01:16:37.732736 | orchestrator | 2026-03-05 01:16:37 | INFO  | Wait 1 second(s) until the next check 
2026-03-05 01:16:40.779319 | orchestrator | 2026-03-05 01:16:40 | INFO  | Task 0021797a-9fe1-46b8-bd3f-84c82d3a40b5 is in state STARTED 2026-03-05 01:16:40.779390 | orchestrator | 2026-03-05 01:16:40 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:16:43.820417 | orchestrator | 2026-03-05 01:16:43 | INFO  | Task 0021797a-9fe1-46b8-bd3f-84c82d3a40b5 is in state STARTED 2026-03-05 01:16:43.820528 | orchestrator | 2026-03-05 01:16:43 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:16:46.857713 | orchestrator | 2026-03-05 01:16:46 | INFO  | Task 0021797a-9fe1-46b8-bd3f-84c82d3a40b5 is in state STARTED 2026-03-05 01:16:46.857842 | orchestrator | 2026-03-05 01:16:46 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:16:49.898211 | orchestrator | 2026-03-05 01:16:49 | INFO  | Task 0021797a-9fe1-46b8-bd3f-84c82d3a40b5 is in state STARTED 2026-03-05 01:16:49.898301 | orchestrator | 2026-03-05 01:16:49 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:16:52.938677 | orchestrator | 2026-03-05 01:16:52 | INFO  | Task 0021797a-9fe1-46b8-bd3f-84c82d3a40b5 is in state STARTED 2026-03-05 01:16:52.938758 | orchestrator | 2026-03-05 01:16:52 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:16:55.976614 | orchestrator | 2026-03-05 01:16:55 | INFO  | Task 0021797a-9fe1-46b8-bd3f-84c82d3a40b5 is in state STARTED 2026-03-05 01:16:55.977117 | orchestrator | 2026-03-05 01:16:55 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:16:59.010655 | orchestrator | 2026-03-05 01:16:59 | INFO  | Task 0021797a-9fe1-46b8-bd3f-84c82d3a40b5 is in state STARTED 2026-03-05 01:16:59.010742 | orchestrator | 2026-03-05 01:16:59 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:17:02.072473 | orchestrator | 2026-03-05 01:17:02 | INFO  | Task 0021797a-9fe1-46b8-bd3f-84c82d3a40b5 is in state STARTED 2026-03-05 01:17:02.072567 | orchestrator | 2026-03-05 01:17:02 | INFO  | Wait 1 second(s) until the next check 2026-03-05 
01:17:05.107320 | orchestrator | 2026-03-05 01:17:05 | INFO  | Task 0021797a-9fe1-46b8-bd3f-84c82d3a40b5 is in state STARTED 2026-03-05 01:17:05.107411 | orchestrator | 2026-03-05 01:17:05 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:17:08.142703 | orchestrator | 2026-03-05 01:17:08 | INFO  | Task 0021797a-9fe1-46b8-bd3f-84c82d3a40b5 is in state STARTED 2026-03-05 01:17:08.142799 | orchestrator | 2026-03-05 01:17:08 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:17:11.177814 | orchestrator | 2026-03-05 01:17:11 | INFO  | Task 0021797a-9fe1-46b8-bd3f-84c82d3a40b5 is in state STARTED 2026-03-05 01:17:11.177913 | orchestrator | 2026-03-05 01:17:11 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:17:14.233339 | orchestrator | 2026-03-05 01:17:14 | INFO  | Task 0021797a-9fe1-46b8-bd3f-84c82d3a40b5 is in state STARTED 2026-03-05 01:17:14.233429 | orchestrator | 2026-03-05 01:17:14 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:17:17.272369 | orchestrator | 2026-03-05 01:17:17 | INFO  | Task 0021797a-9fe1-46b8-bd3f-84c82d3a40b5 is in state STARTED 2026-03-05 01:17:17.272457 | orchestrator | 2026-03-05 01:17:17 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:17:20.312550 | orchestrator | 2026-03-05 01:17:20 | INFO  | Task 0021797a-9fe1-46b8-bd3f-84c82d3a40b5 is in state STARTED 2026-03-05 01:17:20.312679 | orchestrator | 2026-03-05 01:17:20 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:17:23.347882 | orchestrator | 2026-03-05 01:17:23 | INFO  | Task 0021797a-9fe1-46b8-bd3f-84c82d3a40b5 is in state STARTED 2026-03-05 01:17:23.347954 | orchestrator | 2026-03-05 01:17:23 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:17:26.386661 | orchestrator | 2026-03-05 01:17:26 | INFO  | Task 0021797a-9fe1-46b8-bd3f-84c82d3a40b5 is in state STARTED 2026-03-05 01:17:26.386747 | orchestrator | 2026-03-05 01:17:26 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:17:29.420080 
| orchestrator | 2026-03-05 01:17:29 | INFO  | Task 0021797a-9fe1-46b8-bd3f-84c82d3a40b5 is in state STARTED 2026-03-05 01:17:29.420170 | orchestrator | 2026-03-05 01:17:29 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:17:32.461427 | orchestrator | 2026-03-05 01:17:32 | INFO  | Task 0021797a-9fe1-46b8-bd3f-84c82d3a40b5 is in state STARTED 2026-03-05 01:17:32.461667 | orchestrator | 2026-03-05 01:17:32 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:17:35.502258 | orchestrator | 2026-03-05 01:17:35 | INFO  | Task 0021797a-9fe1-46b8-bd3f-84c82d3a40b5 is in state STARTED 2026-03-05 01:17:35.502349 | orchestrator | 2026-03-05 01:17:35 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:17:38.533118 | orchestrator | 2026-03-05 01:17:38 | INFO  | Task 0021797a-9fe1-46b8-bd3f-84c82d3a40b5 is in state STARTED 2026-03-05 01:17:38.533217 | orchestrator | 2026-03-05 01:17:38 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:17:41.572923 | orchestrator | 2026-03-05 01:17:41 | INFO  | Task 0021797a-9fe1-46b8-bd3f-84c82d3a40b5 is in state STARTED 2026-03-05 01:17:41.572999 | orchestrator | 2026-03-05 01:17:41 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:17:44.610111 | orchestrator | 2026-03-05 01:17:44 | INFO  | Task 0021797a-9fe1-46b8-bd3f-84c82d3a40b5 is in state STARTED 2026-03-05 01:17:44.610191 | orchestrator | 2026-03-05 01:17:44 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:17:47.654804 | orchestrator | 2026-03-05 01:17:47 | INFO  | Task 0021797a-9fe1-46b8-bd3f-84c82d3a40b5 is in state STARTED 2026-03-05 01:17:47.654884 | orchestrator | 2026-03-05 01:17:47 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:17:50.688827 | orchestrator | 2026-03-05 01:17:50 | INFO  | Task 0021797a-9fe1-46b8-bd3f-84c82d3a40b5 is in state STARTED 2026-03-05 01:17:50.688909 | orchestrator | 2026-03-05 01:17:50 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:17:53.728926 | orchestrator 
| 2026-03-05 01:17:53 | INFO  | Task 0021797a-9fe1-46b8-bd3f-84c82d3a40b5 is in state SUCCESS 2026-03-05 01:17:53.731078 | orchestrator | 2026-03-05 01:17:53.731129 | orchestrator | 2026-03-05 01:17:53.731135 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-05 01:17:53.731140 | orchestrator | 2026-03-05 01:17:53.731144 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-05 01:17:53.731148 | orchestrator | Thursday 05 March 2026 01:12:40 +0000 (0:00:00.675) 0:00:00.675 ******** 2026-03-05 01:17:53.731152 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:17:53.731157 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:17:53.731161 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:17:53.731165 | orchestrator | 2026-03-05 01:17:53.731169 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-05 01:17:53.731173 | orchestrator | Thursday 05 March 2026 01:12:40 +0000 (0:00:00.407) 0:00:01.082 ******** 2026-03-05 01:17:53.731177 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2026-03-05 01:17:53.731181 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2026-03-05 01:17:53.731185 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2026-03-05 01:17:53.731188 | orchestrator | 2026-03-05 01:17:53.731192 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2026-03-05 01:17:53.731196 | orchestrator | 2026-03-05 01:17:53.731226 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-05 01:17:53.731231 | orchestrator | Thursday 05 March 2026 01:12:41 +0000 (0:00:00.585) 0:00:01.667 ******** 2026-03-05 01:17:53.731235 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 01:17:53.731241 | 
orchestrator | 2026-03-05 01:17:53.731245 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2026-03-05 01:17:53.731248 | orchestrator | Thursday 05 March 2026 01:12:41 +0000 (0:00:00.631) 0:00:02.299 ******** 2026-03-05 01:17:53.731253 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2026-03-05 01:17:53.731257 | orchestrator | 2026-03-05 01:17:53.731260 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2026-03-05 01:17:53.731280 | orchestrator | Thursday 05 March 2026 01:12:45 +0000 (0:00:03.714) 0:00:06.014 ******** 2026-03-05 01:17:53.731284 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2026-03-05 01:17:53.731320 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2026-03-05 01:17:53.731360 | orchestrator | 2026-03-05 01:17:53.731366 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2026-03-05 01:17:53.731372 | orchestrator | Thursday 05 March 2026 01:12:53 +0000 (0:00:07.397) 0:00:13.411 ******** 2026-03-05 01:17:53.731378 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-05 01:17:53.731383 | orchestrator | 2026-03-05 01:17:53.731387 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2026-03-05 01:17:53.731391 | orchestrator | Thursday 05 March 2026 01:12:56 +0000 (0:00:03.568) 0:00:16.979 ******** 2026-03-05 01:17:53.731395 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-03-05 01:17:53.731399 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-03-05 01:17:53.731402 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-05 01:17:53.731406 | orchestrator | 2026-03-05 01:17:53.731410 | orchestrator | TASK [service-ks-register : 
octavia | Creating roles] ************************** 2026-03-05 01:17:53.731414 | orchestrator | Thursday 05 March 2026 01:13:05 +0000 (0:00:09.047) 0:00:26.027 ******** 2026-03-05 01:17:53.731418 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-05 01:17:53.731422 | orchestrator | 2026-03-05 01:17:53.731425 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2026-03-05 01:17:53.731429 | orchestrator | Thursday 05 March 2026 01:13:09 +0000 (0:00:03.563) 0:00:29.590 ******** 2026-03-05 01:17:53.731433 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2026-03-05 01:17:53.731437 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2026-03-05 01:17:53.731440 | orchestrator | 2026-03-05 01:17:53.731444 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2026-03-05 01:17:53.731448 | orchestrator | Thursday 05 March 2026 01:13:17 +0000 (0:00:08.170) 0:00:37.761 ******** 2026-03-05 01:17:53.731452 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2026-03-05 01:17:53.731455 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2026-03-05 01:17:53.731459 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2026-03-05 01:17:53.731463 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2026-03-05 01:17:53.731466 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2026-03-05 01:17:53.731470 | orchestrator | 2026-03-05 01:17:53.731474 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-05 01:17:53.731478 | orchestrator | Thursday 05 March 2026 01:13:34 +0000 (0:00:17.436) 0:00:55.197 ******** 2026-03-05 01:17:53.731481 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 
01:17:53.731485 | orchestrator | 2026-03-05 01:17:53.731489 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2026-03-05 01:17:53.731493 | orchestrator | Thursday 05 March 2026 01:13:35 +0000 (0:00:00.678) 0:00:55.876 ******** 2026-03-05 01:17:53.731496 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:17:53.731500 | orchestrator | 2026-03-05 01:17:53.731504 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2026-03-05 01:17:53.731508 | orchestrator | Thursday 05 March 2026 01:13:42 +0000 (0:00:06.764) 0:01:02.641 ******** 2026-03-05 01:17:53.731512 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:17:53.731516 | orchestrator | 2026-03-05 01:17:53.731520 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-03-05 01:17:53.731550 | orchestrator | Thursday 05 March 2026 01:13:47 +0000 (0:00:05.237) 0:01:07.878 ******** 2026-03-05 01:17:53.731555 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:17:53.731564 | orchestrator | 2026-03-05 01:17:53.731568 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2026-03-05 01:17:53.731572 | orchestrator | Thursday 05 March 2026 01:13:51 +0000 (0:00:03.606) 0:01:11.485 ******** 2026-03-05 01:17:53.731575 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-03-05 01:17:53.731579 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-03-05 01:17:53.731583 | orchestrator | 2026-03-05 01:17:53.731587 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2026-03-05 01:17:53.731590 | orchestrator | Thursday 05 March 2026 01:14:02 +0000 (0:00:11.671) 0:01:23.156 ******** 2026-03-05 01:17:53.731594 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2026-03-05 01:17:53.731598 | 
orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}]) 2026-03-05 01:17:53.731603 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2026-03-05 01:17:53.731609 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2026-03-05 01:17:53.731613 | orchestrator | 2026-03-05 01:17:53.731617 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2026-03-05 01:17:53.731620 | orchestrator | Thursday 05 March 2026 01:14:20 +0000 (0:00:17.557) 0:01:40.714 ******** 2026-03-05 01:17:53.731624 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:17:53.731628 | orchestrator | 2026-03-05 01:17:53.731632 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2026-03-05 01:17:53.731635 | orchestrator | Thursday 05 March 2026 01:14:25 +0000 (0:00:05.387) 0:01:46.102 ******** 2026-03-05 01:17:53.731639 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:17:53.731643 | orchestrator | 2026-03-05 01:17:53.731646 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2026-03-05 01:17:53.731654 | orchestrator | Thursday 05 March 2026 01:14:31 +0000 (0:00:05.833) 0:01:51.935 ******** 2026-03-05 01:17:53.731658 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:17:53.731662 | orchestrator | 2026-03-05 01:17:53.731667 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2026-03-05 01:17:53.731671 | orchestrator | Thursday 05 March 2026 01:14:31 +0000 (0:00:00.259) 0:01:52.195 ******** 2026-03-05 01:17:53.731675 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:17:53.731680 | orchestrator | 
2026-03-05 01:17:53.731684 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-05 01:17:53.731688 | orchestrator | Thursday 05 March 2026 01:14:36 +0000 (0:00:04.967) 0:01:57.163 ******** 2026-03-05 01:17:53.731693 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 01:17:53.731697 | orchestrator | 2026-03-05 01:17:53.731702 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2026-03-05 01:17:53.731706 | orchestrator | Thursday 05 March 2026 01:14:38 +0000 (0:00:01.306) 0:01:58.469 ******** 2026-03-05 01:17:53.731710 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:17:53.731714 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:17:53.731719 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:17:53.731723 | orchestrator | 2026-03-05 01:17:53.731727 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2026-03-05 01:17:53.731732 | orchestrator | Thursday 05 March 2026 01:14:44 +0000 (0:00:05.908) 0:02:04.378 ******** 2026-03-05 01:17:53.731736 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:17:53.731740 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:17:53.731745 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:17:53.731749 | orchestrator | 2026-03-05 01:17:53.731753 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2026-03-05 01:17:53.731761 | orchestrator | Thursday 05 March 2026 01:14:48 +0000 (0:00:04.496) 0:02:08.874 ******** 2026-03-05 01:17:53.731766 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:17:53.731770 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:17:53.731775 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:17:53.731786 | orchestrator | 2026-03-05 01:17:53.731790 | orchestrator | TASK [octavia : Install 
isc-dhcp-client package] ******************************* 2026-03-05 01:17:53.731795 | orchestrator | Thursday 05 March 2026 01:14:49 +0000 (0:00:00.858) 0:02:09.732 ******** 2026-03-05 01:17:53.731799 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:17:53.731803 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:17:53.731807 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:17:53.731811 | orchestrator | 2026-03-05 01:17:53.731814 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2026-03-05 01:17:53.731818 | orchestrator | Thursday 05 March 2026 01:14:51 +0000 (0:00:02.095) 0:02:11.828 ******** 2026-03-05 01:17:53.731822 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:17:53.731826 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:17:53.731829 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:17:53.731833 | orchestrator | 2026-03-05 01:17:53.731837 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2026-03-05 01:17:53.731841 | orchestrator | Thursday 05 March 2026 01:14:52 +0000 (0:00:01.453) 0:02:13.282 ******** 2026-03-05 01:17:53.731844 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:17:53.731848 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:17:53.731852 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:17:53.731856 | orchestrator | 2026-03-05 01:17:53.731859 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2026-03-05 01:17:53.731863 | orchestrator | Thursday 05 March 2026 01:14:54 +0000 (0:00:01.256) 0:02:14.539 ******** 2026-03-05 01:17:53.731867 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:17:53.731871 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:17:53.731874 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:17:53.731878 | orchestrator | 2026-03-05 01:17:53.731885 | orchestrator | TASK [octavia : Enable and start 
octavia-interface.service] ******************** 2026-03-05 01:17:53.731889 | orchestrator | Thursday 05 March 2026 01:14:56 +0000 (0:00:02.231) 0:02:16.770 ******** 2026-03-05 01:17:53.731892 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:17:53.731896 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:17:53.731900 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:17:53.731904 | orchestrator | 2026-03-05 01:17:53.731908 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2026-03-05 01:17:53.731912 | orchestrator | Thursday 05 March 2026 01:14:58 +0000 (0:00:01.738) 0:02:18.508 ******** 2026-03-05 01:17:53.731915 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:17:53.731919 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:17:53.731923 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:17:53.732088 | orchestrator | 2026-03-05 01:17:53.732096 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2026-03-05 01:17:53.732099 | orchestrator | Thursday 05 March 2026 01:14:58 +0000 (0:00:00.663) 0:02:19.172 ******** 2026-03-05 01:17:53.732103 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:17:53.732107 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:17:53.732111 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:17:53.732114 | orchestrator | 2026-03-05 01:17:53.732118 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-05 01:17:53.732124 | orchestrator | Thursday 05 March 2026 01:15:03 +0000 (0:00:04.894) 0:02:24.067 ******** 2026-03-05 01:17:53.732131 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 01:17:53.732136 | orchestrator | 2026-03-05 01:17:53.732140 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2026-03-05 01:17:53.732143 | orchestrator | 
Thursday 05 March 2026 01:15:04 +0000 (0:00:00.902) 0:02:24.969 ******** 2026-03-05 01:17:53.732147 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:17:53.732155 | orchestrator | 2026-03-05 01:17:53.732158 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-03-05 01:17:53.732162 | orchestrator | Thursday 05 March 2026 01:15:09 +0000 (0:00:04.695) 0:02:29.665 ******** 2026-03-05 01:17:53.732166 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:17:53.732171 | orchestrator | 2026-03-05 01:17:53.732177 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2026-03-05 01:17:53.732188 | orchestrator | Thursday 05 March 2026 01:15:12 +0000 (0:00:03.611) 0:02:33.276 ******** 2026-03-05 01:17:53.732194 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-03-05 01:17:53.732200 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-03-05 01:17:53.732206 | orchestrator | 2026-03-05 01:17:53.732212 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2026-03-05 01:17:53.732222 | orchestrator | Thursday 05 March 2026 01:15:21 +0000 (0:00:08.062) 0:02:41.339 ******** 2026-03-05 01:17:53.732230 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:17:53.732235 | orchestrator | 2026-03-05 01:17:53.732241 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2026-03-05 01:17:53.732247 | orchestrator | Thursday 05 March 2026 01:15:24 +0000 (0:00:03.570) 0:02:44.909 ******** 2026-03-05 01:17:53.732253 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:17:53.732259 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:17:53.732272 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:17:53.732278 | orchestrator | 2026-03-05 01:17:53.732284 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2026-03-05 01:17:53.732290 | 
orchestrator | Thursday 05 March 2026 01:15:25 +0000 (0:00:00.465) 0:02:45.374 ******** 2026-03-05 01:17:53.732299 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-05 01:17:53.732314 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 
'tls_backend': 'no'}}}}) 2026-03-05 01:17:53.732321 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-05 01:17:53.732334 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-05 01:17:53.732345 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-05 01:17:53.732352 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-05 01:17:53.732360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-05 01:17:53.732368 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-05 01:17:53.732379 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-05 01:17:53.732385 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-05 01:17:53.732395 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-05 01:17:53.732406 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': 
{'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-05 01:17:53.732412 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:17:53.732420 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:17:53.732427 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:17:53.732434 | orchestrator | 2026-03-05 01:17:53.732440 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2026-03-05 01:17:53.732445 | orchestrator | Thursday 05 March 2026 01:15:27 +0000 (0:00:02.570) 0:02:47.944 ******** 2026-03-05 01:17:53.732451 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:17:53.732457 | orchestrator | 2026-03-05 01:17:53.732466 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2026-03-05 01:17:53.732478 | orchestrator | Thursday 05 March 2026 01:15:27 +0000 (0:00:00.131) 0:02:48.075 ******** 2026-03-05 01:17:53.732485 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:17:53.732490 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:17:53.732496 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:17:53.732502 | orchestrator | 2026-03-05 01:17:53.732508 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2026-03-05 01:17:53.732514 | orchestrator | Thursday 05 March 2026 01:15:28 +0000 (0:00:00.637) 0:02:48.712 ******** 2026-03-05 01:17:53.732520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-05 01:17:53.732574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-05 01:17:53.732582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-05 01:17:53.732589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-05 01:17:53.732595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-05 01:17:53.732602 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:17:53.732626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-05 01:17:53.732633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-05 01:17:53.732730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-05 01:17:53.732743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-05 01:17:53.732750 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-05 01:17:53.732757 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:17:53.732767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-05 01:17:53.732789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-05 01:17:53.732796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-05 01:17:53.732802 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-05 01:17:53.733451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-05 01:17:53.733487 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:17:53.733495 | orchestrator | 2026-03-05 01:17:53.733502 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-05 01:17:53.733510 | orchestrator | Thursday 05 March 2026 01:15:29 +0000 (0:00:00.911) 0:02:49.624 ******** 2026-03-05 01:17:53.733517 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 01:17:53.733524 | orchestrator | 2026-03-05 01:17:53.733556 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-03-05 01:17:53.733563 | orchestrator | Thursday 05 March 2026 01:15:30 +0000 (0:00:00.712) 0:02:50.336 ******** 2026-03-05 01:17:53.733571 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-05 01:17:53.733619 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-05 01:17:53.733628 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-05 01:17:53.733640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-05 01:17:53.733647 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-05 01:17:53.733654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-05 01:17:53.733662 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 
3306'], 'timeout': '30'}}}) 2026-03-05 01:17:53.733674 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-05 01:17:53.733685 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-05 01:17:53.733692 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 
2026-03-05 01:17:53.733703 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-05 01:17:53.733710 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-05 01:17:53.733717 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:17:53.733732 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:17:53.733743 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:17:53.733750 | orchestrator | 2026-03-05 01:17:53.733756 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-03-05 01:17:53.733763 | orchestrator | Thursday 05 March 2026 01:15:35 +0000 (0:00:05.461) 0:02:55.797 ******** 2026-03-05 01:17:53.733770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-05 01:17:53.733780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-05 01:17:53.733786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-05 01:17:53.733793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-05 01:17:53.733805 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-05 01:17:53.733811 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:17:53.733823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-05 01:17:53.733830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-05 01:17:53.733837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-05 01:17:53.733847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-05 01:17:53.733854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-05 01:17:53.733866 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:17:53.733873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-05 01:17:53.733880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-05 
01:17:53.733890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-05 01:17:53.733897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-05 01:17:53.733907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-05 01:17:53.733914 | orchestrator | skipping: [testbed-node-2] 2026-03-05 
01:17:53.733920 | orchestrator | 2026-03-05 01:17:53.733927 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-03-05 01:17:53.733933 | orchestrator | Thursday 05 March 2026 01:15:36 +0000 (0:00:00.808) 0:02:56.606 ******** 2026-03-05 01:17:53.733945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-05 01:17:53.733952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-05 01:17:53.733959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 
'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-05 01:17:53.733970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-05 01:17:53.733977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-05 01:17:53.733983 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:17:53.733993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-05 01:17:53.734007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-05 01:17:53.734067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-05 01:17:53.734076 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-05 01:17:53.734090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-05 01:17:53.734098 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:17:53.734106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-05 01:17:53.734117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-05 01:17:53.734131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-05 01:17:53.734137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-05 01:17:53.734145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-05 01:17:53.734152 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:17:53.734159 | orchestrator | 2026-03-05 01:17:53.734166 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-03-05 01:17:53.734173 | orchestrator | Thursday 05 March 2026 01:15:37 +0000 (0:00:00.883) 0:02:57.490 ******** 2026-03-05 01:17:53.734185 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-05 01:17:53.734197 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-05 01:17:53.734209 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-05 
01:17:53.734216 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-05 01:17:53.734223 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-05 01:17:53.734231 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-05 01:17:53.734242 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-05 01:17:53.734249 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-05 01:17:53.734264 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-05 01:17:53.734271 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-05 01:17:53.734279 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-05 01:17:53.734286 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-05 01:17:53.734299 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:17:53.734306 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:17:53.734312 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:17:53.734326 | orchestrator | 2026-03-05 01:17:53.734333 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2026-03-05 01:17:53.734340 | orchestrator | Thursday 05 March 2026 01:15:42 +0000 (0:00:04.977) 0:03:02.467 ******** 2026-03-05 01:17:53.734350 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-05 01:17:53.734359 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-05 01:17:53.734366 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-05 01:17:53.734373 | orchestrator | 2026-03-05 01:17:53.734380 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2026-03-05 01:17:53.734387 | orchestrator | Thursday 05 March 2026 01:15:44 +0000 (0:00:01.940) 0:03:04.408 ******** 2026-03-05 01:17:53.734394 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-05 01:17:53.734401 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-05 01:17:53.734412 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-05 01:17:53.734423 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-05 01:17:53.734433 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-05 01:17:53.734440 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-05 01:17:53.734447 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-05 01:17:53.734454 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-05 01:17:53.734461 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-05 01:17:53.734471 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-05 01:17:53.734483 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-05 01:17:53.734493 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-05 01:17:53.734501 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:17:53.734507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-05 
01:17:53.734515 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:17:53.734522 | orchestrator | 2026-03-05 01:17:53.734590 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2026-03-05 01:17:53.734598 | orchestrator | Thursday 05 March 2026 01:16:02 +0000 (0:00:18.867) 0:03:23.276 ******** 2026-03-05 01:17:53.734604 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:17:53.734611 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:17:53.734617 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:17:53.734623 | orchestrator | 2026-03-05 01:17:53.734630 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2026-03-05 01:17:53.734636 | orchestrator | Thursday 05 March 2026 01:16:04 +0000 (0:00:01.646) 0:03:24.922 ******** 2026-03-05 01:17:53.734642 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-03-05 01:17:53.734649 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-05 01:17:53.734664 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-05 01:17:53.734670 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-05 01:17:53.734676 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-05 01:17:53.734682 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 
2026-03-05 01:17:53.734688 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-05 01:17:53.734694 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-05 01:17:53.734700 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-05 01:17:53.734707 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-05 01:17:53.734714 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-05 01:17:53.734720 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-05 01:17:53.734726 | orchestrator | 2026-03-05 01:17:53.734733 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2026-03-05 01:17:53.734739 | orchestrator | Thursday 05 March 2026 01:16:10 +0000 (0:00:05.630) 0:03:30.553 ******** 2026-03-05 01:17:53.734745 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-03-05 01:17:53.734751 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-05 01:17:53.734758 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-05 01:17:53.734764 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-05 01:17:53.734770 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-05 01:17:53.734776 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-05 01:17:53.734782 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-05 01:17:53.734788 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-05 01:17:53.734794 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-05 01:17:53.734800 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-05 01:17:53.734805 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-05 
01:17:53.734815 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-05 01:17:53.734821 | orchestrator | 2026-03-05 01:17:53.734826 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2026-03-05 01:17:53.734833 | orchestrator | Thursday 05 March 2026 01:16:16 +0000 (0:00:06.139) 0:03:36.692 ******** 2026-03-05 01:17:53.734838 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-03-05 01:17:53.734843 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-05 01:17:53.734849 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-05 01:17:53.734854 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-05 01:17:53.734860 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-05 01:17:53.734866 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-05 01:17:53.734872 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-05 01:17:53.734878 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-05 01:17:53.734885 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-05 01:17:53.734891 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-05 01:17:53.734896 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-05 01:17:53.734902 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-05 01:17:53.734909 | orchestrator | 2026-03-05 01:17:53.734915 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2026-03-05 01:17:53.734921 | orchestrator | Thursday 05 March 2026 01:16:21 +0000 (0:00:05.283) 0:03:41.976 ******** 2026-03-05 01:17:53.734927 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 
'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-05 01:17:53.734945 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-05 01:17:53.734952 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-05 01:17:53.734962 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-05 01:17:53.734969 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-05 01:17:53.734976 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 
'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-05 01:17:53.734987 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-05 01:17:53.734998 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-05 01:17:53.735005 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-05 01:17:53.735011 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-05 01:17:53.735022 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-05 01:17:53.735028 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-05 01:17:53.735040 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:17:53.735047 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:17:53.735058 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:17:53.735065 | orchestrator | 2026-03-05 01:17:53.735072 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-05 01:17:53.735078 | orchestrator | Thursday 05 March 2026 01:16:25 +0000 (0:00:03.822) 0:03:45.799 ******** 2026-03-05 01:17:53.735084 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:17:53.735091 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:17:53.735097 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:17:53.735103 | orchestrator | 2026-03-05 01:17:53.735110 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2026-03-05 01:17:53.735116 | orchestrator | Thursday 05 March 2026 01:16:25 +0000 (0:00:00.355) 0:03:46.154 ******** 2026-03-05 01:17:53.735122 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:17:53.735128 | orchestrator | 2026-03-05 01:17:53.735134 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2026-03-05 01:17:53.735141 | orchestrator | Thursday 05 March 2026 01:16:28 +0000 (0:00:02.307) 0:03:48.462 ******** 2026-03-05 01:17:53.735147 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:17:53.735153 | orchestrator | 2026-03-05 01:17:53.735160 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2026-03-05 01:17:53.735166 | orchestrator | Thursday 05 March 2026 01:16:30 +0000 (0:00:02.309) 0:03:50.771 ******** 2026-03-05 01:17:53.735172 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:17:53.735179 | orchestrator | 2026-03-05 01:17:53.735185 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2026-03-05 01:17:53.735191 | orchestrator | 
Thursday 05 March 2026 01:16:32 +0000 (0:00:02.484) 0:03:53.255 ******** 2026-03-05 01:17:53.735198 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:17:53.735204 | orchestrator | 2026-03-05 01:17:53.735210 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2026-03-05 01:17:53.735216 | orchestrator | Thursday 05 March 2026 01:16:35 +0000 (0:00:03.037) 0:03:56.293 ******** 2026-03-05 01:17:53.735223 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:17:53.735229 | orchestrator | 2026-03-05 01:17:53.735235 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-05 01:17:53.735245 | orchestrator | Thursday 05 March 2026 01:16:59 +0000 (0:00:23.453) 0:04:19.746 ******** 2026-03-05 01:17:53.735256 | orchestrator | 2026-03-05 01:17:53.735263 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-05 01:17:53.735269 | orchestrator | Thursday 05 March 2026 01:16:59 +0000 (0:00:00.072) 0:04:19.818 ******** 2026-03-05 01:17:53.735275 | orchestrator | 2026-03-05 01:17:53.735282 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-05 01:17:53.735288 | orchestrator | Thursday 05 March 2026 01:16:59 +0000 (0:00:00.066) 0:04:19.885 ******** 2026-03-05 01:17:53.735294 | orchestrator | 2026-03-05 01:17:53.735301 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2026-03-05 01:17:53.735308 | orchestrator | Thursday 05 March 2026 01:16:59 +0000 (0:00:00.067) 0:04:19.952 ******** 2026-03-05 01:17:53.735314 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:17:53.735320 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:17:53.735327 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:17:53.735333 | orchestrator | 2026-03-05 01:17:53.735339 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent 
container] ************* 2026-03-05 01:17:53.735346 | orchestrator | Thursday 05 March 2026 01:17:11 +0000 (0:00:11.606) 0:04:31.559 ******** 2026-03-05 01:17:53.735352 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:17:53.735358 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:17:53.735364 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:17:53.735370 | orchestrator | 2026-03-05 01:17:53.735377 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2026-03-05 01:17:53.735384 | orchestrator | Thursday 05 March 2026 01:17:18 +0000 (0:00:06.909) 0:04:38.469 ******** 2026-03-05 01:17:53.735390 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:17:53.735396 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:17:53.735402 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:17:53.735408 | orchestrator | 2026-03-05 01:17:53.735414 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2026-03-05 01:17:53.735421 | orchestrator | Thursday 05 March 2026 01:17:29 +0000 (0:00:11.540) 0:04:50.010 ******** 2026-03-05 01:17:53.735427 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:17:53.735433 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:17:53.735440 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:17:53.735446 | orchestrator | 2026-03-05 01:17:53.735452 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2026-03-05 01:17:53.735459 | orchestrator | Thursday 05 March 2026 01:17:40 +0000 (0:00:10.828) 0:05:00.839 ******** 2026-03-05 01:17:53.735465 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:17:53.735471 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:17:53.735477 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:17:53.735484 | orchestrator | 2026-03-05 01:17:53.735490 | orchestrator | PLAY RECAP 
********************************************************************* 2026-03-05 01:17:53.735497 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-05 01:17:53.735504 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-05 01:17:53.735510 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-05 01:17:53.735516 | orchestrator | 2026-03-05 01:17:53.735523 | orchestrator | 2026-03-05 01:17:53.735546 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-05 01:17:53.735552 | orchestrator | Thursday 05 March 2026 01:17:52 +0000 (0:00:11.548) 0:05:12.387 ******** 2026-03-05 01:17:53.735565 | orchestrator | =============================================================================== 2026-03-05 01:17:53.735572 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 23.45s 2026-03-05 01:17:53.735578 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 18.87s 2026-03-05 01:17:53.735595 | orchestrator | octavia : Add rules for security groups -------------------------------- 17.56s 2026-03-05 01:17:53.735602 | orchestrator | octavia : Adding octavia related roles --------------------------------- 17.44s 2026-03-05 01:17:53.735608 | orchestrator | octavia : Create security groups for octavia --------------------------- 11.67s 2026-03-05 01:17:53.735615 | orchestrator | octavia : Restart octavia-api container -------------------------------- 11.61s 2026-03-05 01:17:53.735620 | orchestrator | octavia : Restart octavia-worker container ----------------------------- 11.55s 2026-03-05 01:17:53.735626 | orchestrator | octavia : Restart octavia-health-manager container --------------------- 11.54s 2026-03-05 01:17:53.735632 | orchestrator | octavia : Restart 
octavia-housekeeping container ----------------------- 10.83s 2026-03-05 01:17:53.735638 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 9.05s 2026-03-05 01:17:53.735644 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 8.17s 2026-03-05 01:17:53.735651 | orchestrator | octavia : Get security groups for octavia ------------------------------- 8.06s 2026-03-05 01:17:53.735658 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 7.40s 2026-03-05 01:17:53.735664 | orchestrator | octavia : Restart octavia-driver-agent container ------------------------ 6.91s 2026-03-05 01:17:53.735670 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 6.76s 2026-03-05 01:17:53.735677 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 6.14s 2026-03-05 01:17:53.735683 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.91s 2026-03-05 01:17:53.735689 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.83s 2026-03-05 01:17:53.735695 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.63s 2026-03-05 01:17:53.735707 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 5.46s 2026-03-05 01:17:53.735714 | orchestrator | 2026-03-05 01:17:53 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-05 01:18:54.505358 | orchestrator | 2026-03-05 01:18:55.023378 | orchestrator | 2026-03-05 01:18:55.030303 | 
orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Thu Mar 5 01:18:55 UTC 2026 2026-03-05 01:18:55.030572 | orchestrator | 2026-03-05 01:18:55.458533 | orchestrator | ok: Runtime: 0:38:53.471272 2026-03-05 01:18:55.723460 | 2026-03-05 01:18:55.723729 | TASK [Bootstrap services] 2026-03-05 01:18:56.466512 | orchestrator | 2026-03-05 01:18:56.466688 | orchestrator | # BOOTSTRAP 2026-03-05 01:18:56.466709 | orchestrator | 2026-03-05 01:18:56.466723 | orchestrator | + set -e 2026-03-05 01:18:56.466735 | orchestrator | + echo 2026-03-05 01:18:56.466749 | orchestrator | + echo '# BOOTSTRAP' 2026-03-05 01:18:56.466766 | orchestrator | + echo 2026-03-05 01:18:56.466806 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2026-03-05 01:18:56.477223 | orchestrator | + set -e 2026-03-05 01:18:56.477309 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2026-03-05 01:19:02.360851 | orchestrator | 2026-03-05 01:19:02 | INFO  | It takes a moment until task b564de81-7bc3-42d8-8a34-8b45f2ed6459 (flavor-manager) has been started and output is visible here. 
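The container definitions logged during the octavia deployment above carry healthcheck tests such as `healthcheck_port octavia-worker 5672` and `healthcheck_curl http://192.168.16.10:9876`, which gate the container health status on reachability of a dependency port or HTTP endpoint. As a rough, hypothetical stand-in for such a TCP probe (not the actual kolla healthcheck script, which inspects the container's own sockets), the check can be sketched as:

```python
import socket

def tcp_port_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout.

    Simplified illustration of a port healthcheck; a refused or timed-out
    connection is reported as unhealthy (False).
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Docker evaluates the configured `CMD-SHELL` test on the logged `interval`/`timeout`/`retries` schedule (30s/30s/3 in the definitions above), marking the container unhealthy only after consecutive failures.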
2026-03-05 01:19:11.491911 | orchestrator | 2026-03-05 01:19:06 | INFO  | Flavor SCS-1L-1 created 2026-03-05 01:19:11.492155 | orchestrator | 2026-03-05 01:19:06 | INFO  | Flavor SCS-1L-1-5 created 2026-03-05 01:19:11.492196 | orchestrator | 2026-03-05 01:19:06 | INFO  | Flavor SCS-1V-2 created 2026-03-05 01:19:11.492214 | orchestrator | 2026-03-05 01:19:07 | INFO  | Flavor SCS-1V-2-5 created 2026-03-05 01:19:11.492230 | orchestrator | 2026-03-05 01:19:07 | INFO  | Flavor SCS-1V-4 created 2026-03-05 01:19:11.492244 | orchestrator | 2026-03-05 01:19:07 | INFO  | Flavor SCS-1V-4-10 created 2026-03-05 01:19:11.492261 | orchestrator | 2026-03-05 01:19:07 | INFO  | Flavor SCS-1V-8 created 2026-03-05 01:19:11.492275 | orchestrator | 2026-03-05 01:19:07 | INFO  | Flavor SCS-1V-8-20 created 2026-03-05 01:19:11.492296 | orchestrator | 2026-03-05 01:19:07 | INFO  | Flavor SCS-2V-4 created 2026-03-05 01:19:11.492305 | orchestrator | 2026-03-05 01:19:08 | INFO  | Flavor SCS-2V-4-10 created 2026-03-05 01:19:11.492314 | orchestrator | 2026-03-05 01:19:08 | INFO  | Flavor SCS-2V-8 created 2026-03-05 01:19:11.492323 | orchestrator | 2026-03-05 01:19:08 | INFO  | Flavor SCS-2V-8-20 created 2026-03-05 01:19:11.492331 | orchestrator | 2026-03-05 01:19:08 | INFO  | Flavor SCS-2V-16 created 2026-03-05 01:19:11.492340 | orchestrator | 2026-03-05 01:19:08 | INFO  | Flavor SCS-2V-16-50 created 2026-03-05 01:19:11.492349 | orchestrator | 2026-03-05 01:19:08 | INFO  | Flavor SCS-4V-8 created 2026-03-05 01:19:11.492357 | orchestrator | 2026-03-05 01:19:08 | INFO  | Flavor SCS-4V-8-20 created 2026-03-05 01:19:11.492366 | orchestrator | 2026-03-05 01:19:09 | INFO  | Flavor SCS-4V-16 created 2026-03-05 01:19:11.492375 | orchestrator | 2026-03-05 01:19:09 | INFO  | Flavor SCS-4V-16-50 created 2026-03-05 01:19:11.492383 | orchestrator | 2026-03-05 01:19:09 | INFO  | Flavor SCS-4V-32 created 2026-03-05 01:19:11.492392 | orchestrator | 2026-03-05 01:19:09 | INFO  | Flavor SCS-4V-32-100 created 
2026-03-05 01:19:11.492401 | orchestrator | 2026-03-05 01:19:09 | INFO  | Flavor SCS-8V-16 created 2026-03-05 01:19:11.492409 | orchestrator | 2026-03-05 01:19:10 | INFO  | Flavor SCS-8V-16-50 created 2026-03-05 01:19:11.492419 | orchestrator | 2026-03-05 01:19:10 | INFO  | Flavor SCS-8V-32 created 2026-03-05 01:19:11.492427 | orchestrator | 2026-03-05 01:19:10 | INFO  | Flavor SCS-8V-32-100 created 2026-03-05 01:19:11.492437 | orchestrator | 2026-03-05 01:19:10 | INFO  | Flavor SCS-16V-32 created 2026-03-05 01:19:11.492446 | orchestrator | 2026-03-05 01:19:10 | INFO  | Flavor SCS-16V-32-100 created 2026-03-05 01:19:11.492481 | orchestrator | 2026-03-05 01:19:10 | INFO  | Flavor SCS-2V-4-20s created 2026-03-05 01:19:11.492490 | orchestrator | 2026-03-05 01:19:10 | INFO  | Flavor SCS-4V-8-50s created 2026-03-05 01:19:11.492498 | orchestrator | 2026-03-05 01:19:11 | INFO  | Flavor SCS-4V-16-100s created 2026-03-05 01:19:11.492512 | orchestrator | 2026-03-05 01:19:11 | INFO  | Flavor SCS-8V-32-100s created 2026-03-05 01:19:14.118301 | orchestrator | 2026-03-05 01:19:14 | INFO  | Trying to run play bootstrap-basic in environment openstack 2026-03-05 01:19:14.128225 | orchestrator | 2026-03-05 01:19:14 | INFO  | Prepare task for execution of bootstrap-basic. 2026-03-05 01:19:14.205617 | orchestrator | 2026-03-05 01:19:14 | INFO  | Task 3dac40d5-fb3f-4c58-8994-7dfdf6eddab4 (bootstrap-basic) was prepared for execution. 2026-03-05 01:19:14.205719 | orchestrator | 2026-03-05 01:19:14 | INFO  | It takes a moment until task 3dac40d5-fb3f-4c58-8994-7dfdf6eddab4 (bootstrap-basic) has been started and output is visible here. 
2026-03-05 01:20:05.891372 | orchestrator | 2026-03-05 01:20:05.891560 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2026-03-05 01:20:05.891590 | orchestrator | 2026-03-05 01:20:05.891610 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-05 01:20:05.891628 | orchestrator | Thursday 05 March 2026 01:19:18 +0000 (0:00:00.084) 0:00:00.084 ******** 2026-03-05 01:20:05.891647 | orchestrator | ok: [localhost] 2026-03-05 01:20:05.891668 | orchestrator | 2026-03-05 01:20:05.891703 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2026-03-05 01:20:05.891724 | orchestrator | Thursday 05 March 2026 01:19:21 +0000 (0:00:02.096) 0:00:02.181 ******** 2026-03-05 01:20:05.891748 | orchestrator | ok: [localhost] 2026-03-05 01:20:05.891769 | orchestrator | 2026-03-05 01:20:05.891790 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2026-03-05 01:20:05.891810 | orchestrator | Thursday 05 March 2026 01:19:32 +0000 (0:00:11.661) 0:00:13.842 ******** 2026-03-05 01:20:05.891831 | orchestrator | changed: [localhost] 2026-03-05 01:20:05.891853 | orchestrator | 2026-03-05 01:20:05.891875 | orchestrator | TASK [Create public network] *************************************************** 2026-03-05 01:20:05.891897 | orchestrator | Thursday 05 March 2026 01:19:40 +0000 (0:00:07.948) 0:00:21.791 ******** 2026-03-05 01:20:05.891917 | orchestrator | changed: [localhost] 2026-03-05 01:20:05.891940 | orchestrator | 2026-03-05 01:20:05.891967 | orchestrator | TASK [Set public network to default] ******************************************* 2026-03-05 01:20:05.891989 | orchestrator | Thursday 05 March 2026 01:19:46 +0000 (0:00:05.409) 0:00:27.200 ******** 2026-03-05 01:20:05.892002 | orchestrator | changed: [localhost] 2026-03-05 01:20:05.892016 | orchestrator | 2026-03-05 01:20:05.892029 | orchestrator 
| TASK [Create public subnet] **************************************************** 2026-03-05 01:20:05.892042 | orchestrator | Thursday 05 March 2026 01:19:52 +0000 (0:00:06.723) 0:00:33.923 ******** 2026-03-05 01:20:05.892056 | orchestrator | changed: [localhost] 2026-03-05 01:20:05.892069 | orchestrator | 2026-03-05 01:20:05.892083 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2026-03-05 01:20:05.892096 | orchestrator | Thursday 05 March 2026 01:19:57 +0000 (0:00:04.592) 0:00:38.515 ******** 2026-03-05 01:20:05.892109 | orchestrator | changed: [localhost] 2026-03-05 01:20:05.892122 | orchestrator | 2026-03-05 01:20:05.892135 | orchestrator | TASK [Create manager role] ***************************************************** 2026-03-05 01:20:05.892160 | orchestrator | Thursday 05 March 2026 01:20:01 +0000 (0:00:04.296) 0:00:42.812 ******** 2026-03-05 01:20:05.892174 | orchestrator | ok: [localhost] 2026-03-05 01:20:05.892188 | orchestrator | 2026-03-05 01:20:05.892201 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-05 01:20:05.892212 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 01:20:05.892224 | orchestrator | 2026-03-05 01:20:05.892235 | orchestrator | 2026-03-05 01:20:05.892246 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-05 01:20:05.892257 | orchestrator | Thursday 05 March 2026 01:20:05 +0000 (0:00:03.907) 0:00:46.720 ******** 2026-03-05 01:20:05.892268 | orchestrator | =============================================================================== 2026-03-05 01:20:05.892279 | orchestrator | Get volume type LUKS --------------------------------------------------- 11.66s 2026-03-05 01:20:05.892320 | orchestrator | Create volume type LUKS ------------------------------------------------- 7.95s 2026-03-05 01:20:05.892332 | 
orchestrator | Set public network to default ------------------------------------------- 6.72s 2026-03-05 01:20:05.892342 | orchestrator | Create public network --------------------------------------------------- 5.41s 2026-03-05 01:20:05.892354 | orchestrator | Create public subnet ---------------------------------------------------- 4.59s 2026-03-05 01:20:05.892365 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 4.30s 2026-03-05 01:20:05.892376 | orchestrator | Create manager role ----------------------------------------------------- 3.91s 2026-03-05 01:20:05.892387 | orchestrator | Gathering Facts --------------------------------------------------------- 2.10s 2026-03-05 01:20:08.572060 | orchestrator | 2026-03-05 01:20:08 | INFO  | It takes a moment until task b3e6f6c5-81bb-4a69-8910-2cb2f80d51a0 (image-manager) has been started and output is visible here. 2026-03-05 01:20:51.283615 | orchestrator | 2026-03-05 01:20:11 | INFO  | Processing image 'Cirros 0.6.2' 2026-03-05 01:20:51.283704 | orchestrator | 2026-03-05 01:20:11 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2026-03-05 01:20:51.283712 | orchestrator | 2026-03-05 01:20:11 | INFO  | Importing image Cirros 0.6.2 2026-03-05 01:20:51.283718 | orchestrator | 2026-03-05 01:20:11 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-03-05 01:20:51.283724 | orchestrator | 2026-03-05 01:20:13 | INFO  | Waiting for image to leave queued state... 2026-03-05 01:20:51.283729 | orchestrator | 2026-03-05 01:20:15 | INFO  | Waiting for import to complete... 
2026-03-05 01:20:51.283734 | orchestrator | 2026-03-05 01:20:25 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2026-03-05 01:20:51.283740 | orchestrator | 2026-03-05 01:20:26 | INFO  | Checking parameters of 'Cirros 0.6.2' 2026-03-05 01:20:51.283745 | orchestrator | 2026-03-05 01:20:26 | INFO  | Setting internal_version = 0.6.2 2026-03-05 01:20:51.283750 | orchestrator | 2026-03-05 01:20:26 | INFO  | Setting image_original_user = cirros 2026-03-05 01:20:51.283756 | orchestrator | 2026-03-05 01:20:26 | INFO  | Adding tag os:cirros 2026-03-05 01:20:51.283761 | orchestrator | 2026-03-05 01:20:26 | INFO  | Setting property architecture: x86_64 2026-03-05 01:20:51.283766 | orchestrator | 2026-03-05 01:20:26 | INFO  | Setting property hw_disk_bus: scsi 2026-03-05 01:20:51.283771 | orchestrator | 2026-03-05 01:20:27 | INFO  | Setting property hw_rng_model: virtio 2026-03-05 01:20:51.283776 | orchestrator | 2026-03-05 01:20:27 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-03-05 01:20:51.283782 | orchestrator | 2026-03-05 01:20:27 | INFO  | Setting property hw_watchdog_action: reset 2026-03-05 01:20:51.283787 | orchestrator | 2026-03-05 01:20:28 | INFO  | Setting property hypervisor_type: qemu 2026-03-05 01:20:51.283796 | orchestrator | 2026-03-05 01:20:28 | INFO  | Setting property os_distro: cirros 2026-03-05 01:20:51.283801 | orchestrator | 2026-03-05 01:20:28 | INFO  | Setting property os_purpose: minimal 2026-03-05 01:20:51.283807 | orchestrator | 2026-03-05 01:20:28 | INFO  | Setting property replace_frequency: never 2026-03-05 01:20:51.283812 | orchestrator | 2026-03-05 01:20:29 | INFO  | Setting property uuid_validity: none 2026-03-05 01:20:51.283817 | orchestrator | 2026-03-05 01:20:29 | INFO  | Setting property provided_until: none 2026-03-05 01:20:51.283822 | orchestrator | 2026-03-05 01:20:29 | INFO  | Setting property image_description: Cirros 2026-03-05 01:20:51.283827 | orchestrator | 2026-03-05 01:20:29 | INFO  | 
Setting property image_name: Cirros 2026-03-05 01:20:51.283842 | orchestrator | 2026-03-05 01:20:30 | INFO  | Setting property internal_version: 0.6.2 2026-03-05 01:20:51.283847 | orchestrator | 2026-03-05 01:20:30 | INFO  | Setting property image_original_user: cirros 2026-03-05 01:20:51.283852 | orchestrator | 2026-03-05 01:20:30 | INFO  | Setting property os_version: 0.6.2 2026-03-05 01:20:51.283858 | orchestrator | 2026-03-05 01:20:30 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-03-05 01:20:51.283864 | orchestrator | 2026-03-05 01:20:31 | INFO  | Setting property image_build_date: 2023-05-30 2026-03-05 01:20:51.283869 | orchestrator | 2026-03-05 01:20:31 | INFO  | Checking status of 'Cirros 0.6.2' 2026-03-05 01:20:51.283874 | orchestrator | 2026-03-05 01:20:31 | INFO  | Checking visibility of 'Cirros 0.6.2' 2026-03-05 01:20:51.283882 | orchestrator | 2026-03-05 01:20:31 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2026-03-05 01:20:51.283887 | orchestrator | 2026-03-05 01:20:31 | INFO  | Processing image 'Cirros 0.6.3' 2026-03-05 01:20:51.283892 | orchestrator | 2026-03-05 01:20:31 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2026-03-05 01:20:51.283897 | orchestrator | 2026-03-05 01:20:31 | INFO  | Importing image Cirros 0.6.3 2026-03-05 01:20:51.283902 | orchestrator | 2026-03-05 01:20:31 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-03-05 01:20:51.283907 | orchestrator | 2026-03-05 01:20:32 | INFO  | Waiting for image to leave queued state... 2026-03-05 01:20:51.283912 | orchestrator | 2026-03-05 01:20:34 | INFO  | Waiting for import to complete... 
2026-03-05 01:20:51.283926 | orchestrator | 2026-03-05 01:20:44 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2026-03-05 01:20:51.283931 | orchestrator | 2026-03-05 01:20:44 | INFO  | Checking parameters of 'Cirros 0.6.3' 2026-03-05 01:20:51.283937 | orchestrator | 2026-03-05 01:20:44 | INFO  | Setting internal_version = 0.6.3 2026-03-05 01:20:51.283941 | orchestrator | 2026-03-05 01:20:44 | INFO  | Setting image_original_user = cirros 2026-03-05 01:20:51.283946 | orchestrator | 2026-03-05 01:20:44 | INFO  | Adding tag os:cirros 2026-03-05 01:20:51.283951 | orchestrator | 2026-03-05 01:20:45 | INFO  | Setting property architecture: x86_64 2026-03-05 01:20:51.283956 | orchestrator | 2026-03-05 01:20:45 | INFO  | Setting property hw_disk_bus: scsi 2026-03-05 01:20:51.283961 | orchestrator | 2026-03-05 01:20:45 | INFO  | Setting property hw_rng_model: virtio 2026-03-05 01:20:51.283966 | orchestrator | 2026-03-05 01:20:45 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-03-05 01:20:51.283972 | orchestrator | 2026-03-05 01:20:46 | INFO  | Setting property hw_watchdog_action: reset 2026-03-05 01:20:51.283977 | orchestrator | 2026-03-05 01:20:46 | INFO  | Setting property hypervisor_type: qemu 2026-03-05 01:20:51.283982 | orchestrator | 2026-03-05 01:20:46 | INFO  | Setting property os_distro: cirros 2026-03-05 01:20:51.283987 | orchestrator | 2026-03-05 01:20:47 | INFO  | Setting property os_purpose: minimal 2026-03-05 01:20:51.283992 | orchestrator | 2026-03-05 01:20:47 | INFO  | Setting property replace_frequency: never 2026-03-05 01:20:51.283997 | orchestrator | 2026-03-05 01:20:47 | INFO  | Setting property uuid_validity: none 2026-03-05 01:20:51.284002 | orchestrator | 2026-03-05 01:20:47 | INFO  | Setting property provided_until: none 2026-03-05 01:20:51.284007 | orchestrator | 2026-03-05 01:20:48 | INFO  | Setting property image_description: Cirros 2026-03-05 01:20:51.284015 | orchestrator | 2026-03-05 01:20:48 | INFO  | 
Setting property image_name: Cirros 2026-03-05 01:20:51.284021 | orchestrator | 2026-03-05 01:20:48 | INFO  | Setting property internal_version: 0.6.3 2026-03-05 01:20:51.284026 | orchestrator | 2026-03-05 01:20:49 | INFO  | Setting property image_original_user: cirros 2026-03-05 01:20:51.284031 | orchestrator | 2026-03-05 01:20:49 | INFO  | Setting property os_version: 0.6.3 2026-03-05 01:20:51.284036 | orchestrator | 2026-03-05 01:20:49 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-03-05 01:20:51.284041 | orchestrator | 2026-03-05 01:20:50 | INFO  | Setting property image_build_date: 2024-09-26 2026-03-05 01:20:51.284046 | orchestrator | 2026-03-05 01:20:50 | INFO  | Checking status of 'Cirros 0.6.3' 2026-03-05 01:20:51.284051 | orchestrator | 2026-03-05 01:20:50 | INFO  | Checking visibility of 'Cirros 0.6.3' 2026-03-05 01:20:51.284056 | orchestrator | 2026-03-05 01:20:50 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2026-03-05 01:20:51.675854 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh 2026-03-05 01:21:52.625902 | orchestrator | 2026-03-05 01:21:52 | INFO  | date: 2026-03-04 2026-03-05 01:21:52.625979 | orchestrator | 2026-03-05 01:21:52 | INFO  | image: octavia-amphora-haproxy-2024.2.20260304.qcow2 2026-03-05 01:21:52.626002 | orchestrator | 2026-03-05 01:21:52 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260304.qcow2 2026-03-05 01:21:52.626009 | orchestrator | 2026-03-05 01:21:52 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260304.qcow2.CHECKSUM 2026-03-05 01:21:52.789315 | orchestrator | 2026-03-05 01:21:52 | INFO  | checksum: localhost | ok: "/var/lib/zuul/builds/1839e368fcb149ffb676fb798204165f/work/logs" 2026-03-05 01:22:25.950958 | 
orchestrator -> localhost | changed: "/var/lib/zuul/builds/1839e368fcb149ffb676fb798204165f/work/artifacts" 2026-03-05 01:22:26.213459 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/1839e368fcb149ffb676fb798204165f/work/docs" 2026-03-05 01:22:26.234653 | 2026-03-05 01:22:26.234804 | LOOP [fetch-output : Collect logs, artifacts and docs] 2026-03-05 01:22:27.165013 | orchestrator | changed: .d..t...... ./ 2026-03-05 01:22:27.165500 | orchestrator | changed: All items complete 2026-03-05 01:22:27.165580 | 2026-03-05 01:22:27.870696 | orchestrator | changed: .d..t...... ./ 2026-03-05 01:22:28.614533 | orchestrator | changed: .d..t...... ./ 2026-03-05 01:22:28.635237 | 2026-03-05 01:22:28.635396 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2026-03-05 01:22:28.669111 | orchestrator | skipping: Conditional result was False 2026-03-05 01:22:28.676006 | orchestrator | skipping: Conditional result was False 2026-03-05 01:22:28.690809 | 2026-03-05 01:22:28.690940 | PLAY RECAP 2026-03-05 01:22:28.691004 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2026-03-05 01:22:28.691037 | 2026-03-05 01:22:28.828386 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2026-03-05 01:22:28.831098 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2026-03-05 01:22:29.570397 | 2026-03-05 01:22:29.570564 | PLAY [Base post] 2026-03-05 01:22:29.585081 | 2026-03-05 01:22:29.585212 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2026-03-05 01:22:30.644037 | orchestrator | changed 2026-03-05 01:22:30.652953 | 2026-03-05 01:22:30.653084 | PLAY RECAP 2026-03-05 01:22:30.653149 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2026-03-05 01:22:30.653215 | 2026-03-05 01:22:30.787556 | POST-RUN END RESULT_NORMAL: [trusted : 
github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2026-03-05 01:22:30.788716 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2026-03-05 01:22:31.573575 | 2026-03-05 01:22:31.573751 | PLAY [Base post-logs] 2026-03-05 01:22:31.584688 | 2026-03-05 01:22:31.584833 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2026-03-05 01:22:32.030501 | localhost | changed 2026-03-05 01:22:32.044945 | 2026-03-05 01:22:32.045117 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2026-03-05 01:22:32.082578 | localhost | ok 2026-03-05 01:22:32.087874 | 2026-03-05 01:22:32.088021 | TASK [Set zuul-log-path fact] 2026-03-05 01:22:32.104683 | localhost | ok 2026-03-05 01:22:32.116864 | 2026-03-05 01:22:32.116984 | TASK [set-zuul-log-path-fact : Set log path for a build] 2026-03-05 01:22:32.153781 | localhost | ok 2026-03-05 01:22:32.160724 | 2026-03-05 01:22:32.160896 | TASK [upload-logs : Create log directories] 2026-03-05 01:22:32.669948 | localhost | changed 2026-03-05 01:22:32.675487 | 2026-03-05 01:22:32.675650 | TASK [upload-logs : Ensure logs are readable before uploading] 2026-03-05 01:22:33.197684 | localhost -> localhost | ok: Runtime: 0:00:00.007142 2026-03-05 01:22:33.203832 | 2026-03-05 01:22:33.203972 | TASK [upload-logs : Upload logs to log server] 2026-03-05 01:22:33.771949 | localhost | Output suppressed because no_log was given 2026-03-05 01:22:33.776167 | 2026-03-05 01:22:33.776378 | LOOP [upload-logs : Compress console log and json output] 2026-03-05 01:22:33.840199 | localhost | skipping: Conditional result was False 2026-03-05 01:22:33.846671 | localhost | skipping: Conditional result was False 2026-03-05 01:22:33.858808 | 2026-03-05 01:22:33.859113 | LOOP [upload-logs : Upload compressed console log and json output] 2026-03-05 01:22:33.928226 | localhost | skipping: Conditional result was False 2026-03-05 01:22:33.928808 | 2026-03-05 01:22:33.930492 | localhost | skipping: Conditional 
result was False 2026-03-05 01:22:33.936103 | 2026-03-05 01:22:33.936283 | LOOP [upload-logs : Upload console log and json output]